Nov 25 19:19:31 np0005535963 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 19:19:31 np0005535963 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 19:19:31 np0005535963 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
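The crashkernel= argument above uses the range syntax range:size[,range:size,...]: the reservation size is chosen by whichever range the guest's RAM falls into. A minimal sketch of that selection in Python (parse_crashkernel and _to_bytes are our illustrative helpers, not kernel APIs); an 8 GiB guest lands in 2G-64G and gets 256 MiB, matching the reservation logged below:

    # Sketch: choose the crashkernel reservation for a given RAM size,
    # following the range:size[,range:size,...] syntax shown above.
    def _to_bytes(s):
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
        return int(s[:-1]) * units[s[-1]] if s and s[-1] in units else int(s or 0)

    def parse_crashkernel(spec, ram_bytes):
        for entry in spec.split(","):
            rng, size = entry.split(":")
            start, _, end = rng.partition("-")
            lo = _to_bytes(start)
            hi = _to_bytes(end) if end else float("inf")
            if lo <= ram_bytes < hi:  # range start inclusive, end exclusive
                return _to_bytes(size)
        return 0

    assert parse_crashkernel("1G-2G:192M,2G-64G:256M,64G-:512M", 8 * 2**30) == 256 * 2**20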
Nov 25 19:19:31 np0005535963 kernel: BIOS-provided physical RAM map:
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 19:19:31 np0005535963 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
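Summing the three ranges marked usable above gives the guest's RAM; e820 end addresses are inclusive, so each range spans end - start + 1 bytes. A quick check in Python:

    # Sum the "usable" e820 ranges printed above (end addresses are inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**30:.2f} GiB")  # 8.00 GiB minus small firmware holes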
Nov 25 19:19:31 np0005535963 kernel: NX (Execute Disable) protection: active
Nov 25 19:19:31 np0005535963 kernel: APIC: Static calls initialized
Nov 25 19:19:31 np0005535963 kernel: SMBIOS 2.8 present.
Nov 25 19:19:31 np0005535963 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 19:19:31 np0005535963 kernel: Hypervisor detected: KVM
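The DMI identity logged above can be read back from inside the guest; a sketch assuming the standard /sys/class/dmi/id layout:

    # Read back the DMI strings the kernel logged (OpenStack Nova, BIOS 1.15.0-1).
    from pathlib import Path

    dmi = Path("/sys/class/dmi/id")
    for name in ("sys_vendor", "product_name", "bios_version", "bios_date"):
        f = dmi / name
        if f.exists():
            print(name, "=", f.read_text().strip())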
Nov 25 19:19:31 np0005535963 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 19:19:31 np0005535963 kernel: kvm-clock: using sched offset of 4520946080 cycles
Nov 25 19:19:31 np0005535963 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 19:19:31 np0005535963 kernel: tsc: Detected 2800.000 MHz processor
Nov 25 19:19:31 np0005535963 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 19:19:31 np0005535963 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 19:19:31 np0005535963 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 19:19:31 np0005535963 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 19:19:31 np0005535963 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 19:19:31 np0005535963 kernel: Using GB pages for direct mapping
Nov 25 19:19:31 np0005535963 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 25 19:19:31 np0005535963 kernel: ACPI: Early table checksum verification disabled
Nov 25 19:19:31 np0005535963 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 19:19:31 np0005535963 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:19:31 np0005535963 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:19:31 np0005535963 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:19:31 np0005535963 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 19:19:31 np0005535963 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:19:31 np0005535963 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 19:19:31 np0005535963 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 19:19:31 np0005535963 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 19:19:31 np0005535963 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 19:19:31 np0005535963 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 19:19:31 np0005535963 kernel: No NUMA configuration found
Nov 25 19:19:31 np0005535963 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 19:19:31 np0005535963 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 25 19:19:31 np0005535963 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
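The reserved window is consistent with the 2G-64G:256M rule from the command line: 0xbf000000 - 0xaf000000 is exactly 256 MiB. Checked in Python:

    # The crashkernel window logged above spans exactly 256 MiB.
    start, end = 0x00000000AF000000, 0x00000000BF000000
    assert end - start == 256 * 2**20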
Nov 25 19:19:31 np0005535963 kernel: Zone ranges:
Nov 25 19:19:31 np0005535963 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 19:19:31 np0005535963 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 19:19:31 np0005535963 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 19:19:31 np0005535963 kernel:  Device   empty
Nov 25 19:19:31 np0005535963 kernel: Movable zone start for each node
Nov 25 19:19:31 np0005535963 kernel: Early memory node ranges
Nov 25 19:19:31 np0005535963 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 19:19:31 np0005535963 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 19:19:31 np0005535963 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 19:19:31 np0005535963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 19:19:31 np0005535963 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 19:19:31 np0005535963 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 19:19:31 np0005535963 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 19:19:31 np0005535963 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 19:19:31 np0005535963 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 19:19:31 np0005535963 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 19:19:31 np0005535963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 19:19:31 np0005535963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 19:19:31 np0005535963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 19:19:31 np0005535963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 19:19:31 np0005535963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 19:19:31 np0005535963 kernel: TSC deadline timer available
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Max. logical packages:   8
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Max. logical dies:       8
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Max. dies per package:   1
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Max. threads per core:   1
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Num. cores per package:     1
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Num. threads per package:   1
Nov 25 19:19:31 np0005535963 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
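This topology (eight single-core, single-thread packages) is how QEMU typically presents 8 vCPUs. It can be re-derived on the running guest from sysfs; a sketch assuming the standard /sys/devices/system/cpu layout:

    # Count logical CPUs and distinct physical packages, as summarized above.
    from pathlib import Path

    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"))
    packages = {
        (c / "topology/physical_package_id").read_text().strip()
        for c in cpus
        if (c / "topology/physical_package_id").exists()
    }
    print(f"{len(cpus)} logical CPUs across {len(packages)} package(s)")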
Nov 25 19:19:31 np0005535963 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 19:19:31 np0005535963 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 19:19:31 np0005535963 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 19:19:31 np0005535963 kernel: Booting paravirtualized kernel on KVM
Nov 25 19:19:31 np0005535963 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 19:19:31 np0005535963 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 19:19:31 np0005535963 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 19:19:31 np0005535963 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 19:19:31 np0005535963 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 19:19:31 np0005535963 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 19:19:31 np0005535963 kernel: random: crng init done
Nov 25 19:19:31 np0005535963 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: Fallback order for Node 0: 0 
Nov 25 19:19:31 np0005535963 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 19:19:31 np0005535963 kernel: Policy zone: Normal
Nov 25 19:19:31 np0005535963 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 19:19:31 np0005535963 kernel: software IO TLB: area num 8.
Nov 25 19:19:31 np0005535963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 19:19:31 np0005535963 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 19:19:31 np0005535963 kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 19:19:31 np0005535963 kernel: Dynamic Preempt: voluntary
Nov 25 19:19:31 np0005535963 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 19:19:31 np0005535963 kernel: rcu: 	RCU event tracing is enabled.
Nov 25 19:19:31 np0005535963 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 19:19:31 np0005535963 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 25 19:19:31 np0005535963 kernel: 	Rude variant of Tasks RCU enabled.
Nov 25 19:19:31 np0005535963 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 25 19:19:31 np0005535963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 19:19:31 np0005535963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 19:19:31 np0005535963 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:19:31 np0005535963 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:19:31 np0005535963 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 19:19:31 np0005535963 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 19:19:31 np0005535963 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 19:19:31 np0005535963 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 19:19:31 np0005535963 kernel: Console: colour VGA+ 80x25
Nov 25 19:19:31 np0005535963 kernel: printk: console [ttyS0] enabled
Nov 25 19:19:31 np0005535963 kernel: ACPI: Core revision 20230331
Nov 25 19:19:31 np0005535963 kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 19:19:31 np0005535963 kernel: x2apic enabled
Nov 25 19:19:31 np0005535963 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 19:19:31 np0005535963 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 19:19:31 np0005535963 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
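The preset value is self-consistent: BogoMIPS is lpj * HZ / 500000, so with lpj=2800000 and HZ=1000 (the tick rate we assume for this RHEL 9 build) a 2800 MHz vCPU reports 5600.00, and eight of them total the 44800.00 logged later. In Python:

    # BogoMIPS from loops-per-jiffy: lpj * HZ / 500000 (HZ=1000 assumed here).
    lpj, hz, ncpus = 2_800_000, 1000, 8
    bogomips = lpj * hz / 500_000
    assert bogomips == 5600.0 and bogomips * ncpus == 44800.0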
Nov 25 19:19:31 np0005535963 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 19:19:31 np0005535963 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 19:19:31 np0005535963 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 19:19:31 np0005535963 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 19:19:31 np0005535963 kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 19:19:31 np0005535963 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 19:19:31 np0005535963 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 19:19:31 np0005535963 kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 19:19:31 np0005535963 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 19:19:31 np0005535963 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 19:19:31 np0005535963 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 19:19:31 np0005535963 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 19:19:31 np0005535963 kernel: x86/bugs: return thunk changed
Nov 25 19:19:31 np0005535963 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
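The per-issue mitigation state summarized above is exported at runtime, so the SRSO warning can be rechecked after microcode updates. A sketch assuming the standard /sys/devices/system/cpu/vulnerabilities directory:

    # Dump the same mitigation status lines the kernel printed at boot.
    from pathlib import Path

    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")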
Nov 25 19:19:31 np0005535963 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 19:19:31 np0005535963 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 19:19:31 np0005535963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 19:19:31 np0005535963 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 19:19:31 np0005535963 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 19:19:31 np0005535963 kernel: Freeing SMP alternatives memory: 40K
Nov 25 19:19:31 np0005535963 kernel: pid_max: default: 32768 minimum: 301
Nov 25 19:19:31 np0005535963 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 19:19:31 np0005535963 kernel: landlock: Up and running.
Nov 25 19:19:31 np0005535963 kernel: Yama: becoming mindful.
Nov 25 19:19:31 np0005535963 kernel: SELinux:  Initializing.
Nov 25 19:19:31 np0005535963 kernel: LSM support for eBPF active
Nov 25 19:19:31 np0005535963 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 19:19:31 np0005535963 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 19:19:31 np0005535963 kernel: ... version:                0
Nov 25 19:19:31 np0005535963 kernel: ... bit width:              48
Nov 25 19:19:31 np0005535963 kernel: ... generic registers:      6
Nov 25 19:19:31 np0005535963 kernel: ... value mask:             0000ffffffffffff
Nov 25 19:19:31 np0005535963 kernel: ... max period:             00007fffffffffff
Nov 25 19:19:31 np0005535963 kernel: ... fixed-purpose events:   0
Nov 25 19:19:31 np0005535963 kernel: ... event mask:             000000000000003f
Nov 25 19:19:31 np0005535963 kernel: signal: max sigframe size: 1776
Nov 25 19:19:31 np0005535963 kernel: rcu: Hierarchical SRCU implementation.
Nov 25 19:19:31 np0005535963 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 25 19:19:31 np0005535963 kernel: smp: Bringing up secondary CPUs ...
Nov 25 19:19:31 np0005535963 kernel: smpboot: x86: Booting SMP configuration:
Nov 25 19:19:31 np0005535963 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 19:19:31 np0005535963 kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 19:19:31 np0005535963 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 25 19:19:31 np0005535963 kernel: node 0 deferred pages initialised in 8ms
Nov 25 19:19:31 np0005535963 kernel: Memory: 7765864K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616276K reserved, 0K cma-reserved)
Nov 25 19:19:31 np0005535963 kernel: devtmpfs: initialized
Nov 25 19:19:31 np0005535963 kernel: x86/mm: Memory block size: 128MB
Nov 25 19:19:31 np0005535963 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 19:19:31 np0005535963 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 19:19:31 np0005535963 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 19:19:31 np0005535963 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 19:19:31 np0005535963 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 19:19:31 np0005535963 kernel: audit: initializing netlink subsys (disabled)
Nov 25 19:19:31 np0005535963 kernel: audit: type=2000 audit(1764116370.028:1): state=initialized audit_enabled=0 res=1
Nov 25 19:19:31 np0005535963 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 19:19:31 np0005535963 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 19:19:31 np0005535963 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 19:19:31 np0005535963 kernel: cpuidle: using governor menu
Nov 25 19:19:31 np0005535963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 19:19:31 np0005535963 kernel: PCI: Using configuration type 1 for base access
Nov 25 19:19:31 np0005535963 kernel: PCI: Using configuration type 1 for extended access
Nov 25 19:19:31 np0005535963 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 19:19:31 np0005535963 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 19:19:31 np0005535963 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 19:19:31 np0005535963 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 19:19:31 np0005535963 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 19:19:31 np0005535963 kernel: Demotion targets for Node 0: null
Nov 25 19:19:31 np0005535963 kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 19:19:31 np0005535963 kernel: ACPI: Added _OSI(Module Device)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Added _OSI(Processor Device)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 19:19:31 np0005535963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 19:19:31 np0005535963 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 19:19:31 np0005535963 kernel: ACPI: Interpreter enabled
Nov 25 19:19:31 np0005535963 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 19:19:31 np0005535963 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 19:19:31 np0005535963 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 19:19:31 np0005535963 kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 19:19:31 np0005535963 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 19:19:31 np0005535963 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [3] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [4] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [5] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [6] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [7] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [8] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [9] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [10] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [11] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [12] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [13] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [14] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [15] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [16] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [17] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [18] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [19] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [20] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [21] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [22] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [23] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [24] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [25] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [26] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [27] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [28] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [29] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [30] registered
Nov 25 19:19:31 np0005535963 kernel: acpiphp: Slot [31] registered
Nov 25 19:19:31 np0005535963 kernel: PCI host bridge to bus 0000:00
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 19:19:31 np0005535963 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 19:19:31 np0005535963 kernel: iommu: Default domain type: Translated
Nov 25 19:19:31 np0005535963 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 19:19:31 np0005535963 kernel: SCSI subsystem initialized
Nov 25 19:19:31 np0005535963 kernel: ACPI: bus type USB registered
Nov 25 19:19:31 np0005535963 kernel: usbcore: registered new interface driver usbfs
Nov 25 19:19:31 np0005535963 kernel: usbcore: registered new interface driver hub
Nov 25 19:19:31 np0005535963 kernel: usbcore: registered new device driver usb
Nov 25 19:19:31 np0005535963 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 19:19:31 np0005535963 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 19:19:31 np0005535963 kernel: PTP clock support registered
Nov 25 19:19:31 np0005535963 kernel: EDAC MC: Ver: 3.0.0
Nov 25 19:19:31 np0005535963 kernel: NetLabel: Initializing
Nov 25 19:19:31 np0005535963 kernel: NetLabel:  domain hash size = 128
Nov 25 19:19:31 np0005535963 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 19:19:31 np0005535963 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 19:19:31 np0005535963 kernel: PCI: Using ACPI for IRQ routing
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 19:19:31 np0005535963 kernel: vgaarb: loaded
Nov 25 19:19:31 np0005535963 kernel: clocksource: Switched to clocksource kvm-clock
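The switch to kvm-clock can be confirmed from sysfs; a sketch assuming the usual clocksource0 node:

    # Show the active and registered clocksources (kvm-clock on this guest).
    from pathlib import Path

    cs = Path("/sys/devices/system/clocksource/clocksource0")
    print("current:  ", (cs / "current_clocksource").read_text().strip())
    print("available:", (cs / "available_clocksource").read_text().strip())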
Nov 25 19:19:31 np0005535963 kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 19:19:31 np0005535963 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 19:19:31 np0005535963 kernel: pnp: PnP ACPI init
Nov 25 19:19:31 np0005535963 kernel: pnp: PnP ACPI: found 5 devices
Nov 25 19:19:31 np0005535963 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_INET protocol family
Nov 25 19:19:31 np0005535963 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 19:19:31 np0005535963 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 19:19:31 np0005535963 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
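In these hash-table lines the order is floor(log2(bytes)) minus PAGE_SHIFT (12 for 4 KiB pages), which is why the 196608-byte MPTCP table still reports order 5. A spot-check in Python:

    # "order: N" here is floor(log2(bytes)) - PAGE_SHIFT, with PAGE_SHIFT = 12.
    checks = [
        (7, 524288),   # TCP established hash table
        (8, 1048576),  # TCP bind hash table
        (5, 196608),   # MPTCP token hash table (not a power-of-two size)
    ]
    for order, nbytes in checks:
        assert nbytes.bit_length() - 1 - 12 == order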
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_XDP protocol family
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 19:19:31 np0005535963 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 19:19:31 np0005535963 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 19:19:31 np0005535963 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 94868 usecs
Nov 25 19:19:31 np0005535963 kernel: PCI: CLS 0 bytes, default 64
Nov 25 19:19:31 np0005535963 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 19:19:31 np0005535963 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 19:19:31 np0005535963 kernel: ACPI: bus type thunderbolt registered
Nov 25 19:19:31 np0005535963 kernel: Trying to unpack rootfs image as initramfs...
Nov 25 19:19:31 np0005535963 kernel: Initialise system trusted keyrings
Nov 25 19:19:31 np0005535963 kernel: Key type blacklist registered
Nov 25 19:19:31 np0005535963 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 19:19:31 np0005535963 kernel: zbud: loaded
Nov 25 19:19:31 np0005535963 kernel: integrity: Platform Keyring initialized
Nov 25 19:19:31 np0005535963 kernel: integrity: Machine keyring initialized
Nov 25 19:19:31 np0005535963 kernel: Freeing initrd memory: 85868K
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_ALG protocol family
Nov 25 19:19:31 np0005535963 kernel: xor: automatically using best checksumming function   avx       
Nov 25 19:19:31 np0005535963 kernel: Key type asymmetric registered
Nov 25 19:19:31 np0005535963 kernel: Asymmetric key parser 'x509' registered
Nov 25 19:19:31 np0005535963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 19:19:31 np0005535963 kernel: io scheduler mq-deadline registered
Nov 25 19:19:31 np0005535963 kernel: io scheduler kyber registered
Nov 25 19:19:31 np0005535963 kernel: io scheduler bfq registered
Nov 25 19:19:31 np0005535963 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 19:19:31 np0005535963 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 19:19:31 np0005535963 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 19:19:31 np0005535963 kernel: ACPI: button: Power Button [PWRF]
Nov 25 19:19:31 np0005535963 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 19:19:31 np0005535963 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 19:19:31 np0005535963 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 19:19:31 np0005535963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 19:19:31 np0005535963 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 19:19:31 np0005535963 kernel: Non-volatile memory driver v1.3
Nov 25 19:19:31 np0005535963 kernel: rdac: device handler registered
Nov 25 19:19:31 np0005535963 kernel: hp_sw: device handler registered
Nov 25 19:19:31 np0005535963 kernel: emc: device handler registered
Nov 25 19:19:31 np0005535963 kernel: alua: device handler registered
Nov 25 19:19:31 np0005535963 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 19:19:31 np0005535963 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 19:19:31 np0005535963 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 19:19:31 np0005535963 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 19:19:31 np0005535963 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 19:19:31 np0005535963 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 19:19:31 np0005535963 kernel: usb usb1: Product: UHCI Host Controller
Nov 25 19:19:31 np0005535963 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 19:19:31 np0005535963 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 19:19:31 np0005535963 kernel: hub 1-0:1.0: USB hub found
Nov 25 19:19:31 np0005535963 kernel: hub 1-0:1.0: 2 ports detected
Nov 25 19:19:31 np0005535963 kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 19:19:31 np0005535963 kernel: usbserial: USB Serial support registered for generic
Nov 25 19:19:31 np0005535963 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 19:19:31 np0005535963 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 19:19:31 np0005535963 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 19:19:31 np0005535963 kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 19:19:31 np0005535963 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 19:19:31 np0005535963 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 19:19:31 np0005535963 kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 19:19:31 np0005535963 kernel: rtc_cmos 00:04: setting system clock to 2025-11-26T00:19:30 UTC (1764116370)
Nov 25 19:19:31 np0005535963 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 19:19:31 np0005535963 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
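The epoch in the rtc_cmos clock line (1764116370) decodes to exactly the UTC timestamp shown, and the same second appears in the audit record near the top of this boot. Verified in Python:

    # The RTC line's epoch value matches its human-readable timestamp.
    from datetime import datetime, timezone

    t = datetime.fromtimestamp(1764116370, tz=timezone.utc)
    assert t.isoformat() == "2025-11-26T00:19:30+00:00"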
Nov 25 19:19:31 np0005535963 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 19:19:31 np0005535963 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 19:19:31 np0005535963 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 19:19:31 np0005535963 kernel: usbcore: registered new interface driver usbhid
Nov 25 19:19:31 np0005535963 kernel: usbhid: USB HID core driver
Nov 25 19:19:31 np0005535963 kernel: drop_monitor: Initializing network drop monitor service
Nov 25 19:19:31 np0005535963 kernel: Initializing XFRM netlink socket
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_INET6 protocol family
Nov 25 19:19:31 np0005535963 kernel: Segment Routing with IPv6
Nov 25 19:19:31 np0005535963 kernel: NET: Registered PF_PACKET protocol family
Nov 25 19:19:31 np0005535963 kernel: mpls_gso: MPLS GSO support
Nov 25 19:19:31 np0005535963 kernel: IPI shorthand broadcast: enabled
Nov 25 19:19:31 np0005535963 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 19:19:31 np0005535963 kernel: AES CTR mode by8 optimization enabled
Nov 25 19:19:31 np0005535963 kernel: sched_clock: Marking stable (1257009420, 141491190)->(1525796540, -127295930)
Nov 25 19:19:31 np0005535963 kernel: registered taskstats version 1
Nov 25 19:19:31 np0005535963 kernel: Loading compiled-in X.509 certificates
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 19:19:31 np0005535963 kernel: Demotion targets for Node 0: null
Nov 25 19:19:31 np0005535963 kernel: page_owner is disabled
Nov 25 19:19:31 np0005535963 kernel: Key type .fscrypt registered
Nov 25 19:19:31 np0005535963 kernel: Key type fscrypt-provisioning registered
Nov 25 19:19:31 np0005535963 kernel: Key type big_key registered
Nov 25 19:19:31 np0005535963 kernel: Key type encrypted registered
Nov 25 19:19:31 np0005535963 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 19:19:31 np0005535963 kernel: Loading compiled-in module X.509 certificates
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 19:19:31 np0005535963 kernel: ima: Allocated hash algorithm: sha256
Nov 25 19:19:31 np0005535963 kernel: ima: No architecture policies found
Nov 25 19:19:31 np0005535963 kernel: evm: Initialising EVM extended attributes:
Nov 25 19:19:31 np0005535963 kernel: evm: security.selinux
Nov 25 19:19:31 np0005535963 kernel: evm: security.SMACK64 (disabled)
Nov 25 19:19:31 np0005535963 kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 19:19:31 np0005535963 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 19:19:31 np0005535963 kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 19:19:31 np0005535963 kernel: evm: security.apparmor (disabled)
Nov 25 19:19:31 np0005535963 kernel: evm: security.ima
Nov 25 19:19:31 np0005535963 kernel: evm: security.capability
Nov 25 19:19:31 np0005535963 kernel: evm: HMAC attrs: 0x1
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: Manufacturer: QEMU
Nov 25 19:19:31 np0005535963 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 19:19:31 np0005535963 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 19:19:31 np0005535963 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 19:19:31 np0005535963 kernel: Running certificate verification RSA selftest
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 19:19:31 np0005535963 kernel: Running certificate verification ECDSA selftest
Nov 25 19:19:31 np0005535963 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 19:19:31 np0005535963 kernel: clk: Disabling unused clocks
Nov 25 19:19:31 np0005535963 kernel: Freeing unused decrypted memory: 2028K
Nov 25 19:19:31 np0005535963 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 19:19:31 np0005535963 kernel: Write protecting the kernel read-only data: 30720k
Nov 25 19:19:31 np0005535963 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 19:19:31 np0005535963 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 19:19:31 np0005535963 kernel: Run /init as init process
Nov 25 19:19:31 np0005535963 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 19:19:31 np0005535963 systemd: Detected virtualization kvm.
Nov 25 19:19:31 np0005535963 systemd: Detected architecture x86-64.
Nov 25 19:19:31 np0005535963 systemd: Running in initrd.
Nov 25 19:19:31 np0005535963 systemd: No hostname configured, using default hostname.
Nov 25 19:19:31 np0005535963 systemd: Hostname set to <localhost>.
Nov 25 19:19:31 np0005535963 systemd: Initializing machine ID from VM UUID.
Nov 25 19:19:31 np0005535963 systemd: Queued start job for default target Initrd Default Target.
Nov 25 19:19:31 np0005535963 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 19:19:31 np0005535963 systemd: Reached target Local Encrypted Volumes.
Nov 25 19:19:31 np0005535963 systemd: Reached target Initrd /usr File System.
Nov 25 19:19:31 np0005535963 systemd: Reached target Local File Systems.
Nov 25 19:19:31 np0005535963 systemd: Reached target Path Units.
Nov 25 19:19:31 np0005535963 systemd: Reached target Slice Units.
Nov 25 19:19:31 np0005535963 systemd: Reached target Swaps.
Nov 25 19:19:31 np0005535963 systemd: Reached target Timer Units.
Nov 25 19:19:31 np0005535963 systemd: Listening on D-Bus System Message Bus Socket.
Nov 25 19:19:31 np0005535963 systemd: Listening on Journal Socket (/dev/log).
Nov 25 19:19:31 np0005535963 systemd: Listening on Journal Socket.
Nov 25 19:19:31 np0005535963 systemd: Listening on udev Control Socket.
Nov 25 19:19:31 np0005535963 systemd: Listening on udev Kernel Socket.
Nov 25 19:19:31 np0005535963 systemd: Reached target Socket Units.
Nov 25 19:19:31 np0005535963 systemd: Starting Create List of Static Device Nodes...
Nov 25 19:19:31 np0005535963 systemd: Starting Journal Service...
Nov 25 19:19:31 np0005535963 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 19:19:31 np0005535963 systemd: Starting Apply Kernel Variables...
Nov 25 19:19:31 np0005535963 systemd: Starting Create System Users...
Nov 25 19:19:31 np0005535963 systemd: Starting Setup Virtual Console...
Nov 25 19:19:31 np0005535963 systemd: Finished Create List of Static Device Nodes.
Nov 25 19:19:31 np0005535963 systemd: Finished Apply Kernel Variables.
Nov 25 19:19:31 np0005535963 systemd: Finished Create System Users.
Nov 25 19:19:31 np0005535963 systemd-journald[306]: Journal started
Nov 25 19:19:31 np0005535963 systemd-journald[306]: Runtime Journal (/run/log/journal/2220aeb194e14f3194a220ade60d36f9) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:19:31 np0005535963 systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 25 19:19:31 np0005535963 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 25 19:19:31 np0005535963 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 19:19:31 np0005535963 systemd: Started Journal Service.
Nov 25 19:19:31 np0005535963 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 19:19:31 np0005535963 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 19:19:31 np0005535963 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 19:19:31 np0005535963 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 19:19:31 np0005535963 systemd[1]: Finished Setup Virtual Console.
Nov 25 19:19:31 np0005535963 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 19:19:31 np0005535963 systemd[1]: Starting dracut cmdline hook...
Nov 25 19:19:31 np0005535963 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 19:19:31 np0005535963 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 19:19:31 np0005535963 systemd[1]: Finished dracut cmdline hook.
Nov 25 19:19:31 np0005535963 systemd[1]: Starting dracut pre-udev hook...
Nov 25 19:19:31 np0005535963 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 19:19:31 np0005535963 kernel: device-mapper: uevent: version 1.0.3
Nov 25 19:19:31 np0005535963 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 19:19:31 np0005535963 kernel: RPC: Registered named UNIX socket transport module.
Nov 25 19:19:31 np0005535963 kernel: RPC: Registered udp transport module.
Nov 25 19:19:31 np0005535963 kernel: RPC: Registered tcp transport module.
Nov 25 19:19:31 np0005535963 kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 19:19:31 np0005535963 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 19:19:32 np0005535963 rpc.statd[445]: Version 2.5.4 starting
Nov 25 19:19:32 np0005535963 rpc.statd[445]: Initializing NSM state
Nov 25 19:19:32 np0005535963 rpc.idmapd[450]: Setting log level to 0
Nov 25 19:19:32 np0005535963 systemd[1]: Finished dracut pre-udev hook.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 19:19:32 np0005535963 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 19:19:32 np0005535963 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting dracut pre-trigger hook...
Nov 25 19:19:32 np0005535963 systemd[1]: Finished dracut pre-trigger hook.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting Coldplug All udev Devices...
Nov 25 19:19:32 np0005535963 systemd[1]: Created slice Slice /system/modprobe.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 19:19:32 np0005535963 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 19:19:32 np0005535963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:19:32 np0005535963 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:19:32 np0005535963 systemd[1]: Mounting Kernel Configuration File System...
Nov 25 19:19:32 np0005535963 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Network.
Nov 25 19:19:32 np0005535963 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 19:19:32 np0005535963 systemd[1]: Starting dracut initqueue hook...
Nov 25 19:19:32 np0005535963 systemd[1]: Mounted Kernel Configuration File System.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target System Initialization.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Basic System.
Nov 25 19:19:32 np0005535963 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 19:19:32 np0005535963 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 19:19:32 np0005535963 kernel: vda: vda1
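The virtio-blk capacity line is internally consistent: 167772160 sectors of 512 bytes is 85.9 GB decimal and exactly 80 GiB binary. In Python:

    # 167772160 x 512-byte sectors -> 85.9 GB (decimal), exactly 80.0 GiB (binary).
    nbytes = 167772160 * 512
    assert nbytes == 80 * 2**30
    print(f"{nbytes / 10**9:.1f} GB")  # 85.9 GB, as logged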
Nov 25 19:19:32 np0005535963 systemd-udevd[511]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:19:32 np0005535963 kernel: scsi host0: ata_piix
Nov 25 19:19:32 np0005535963 kernel: scsi host1: ata_piix
Nov 25 19:19:32 np0005535963 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 19:19:32 np0005535963 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 19:19:32 np0005535963 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Initrd Root Device.
Nov 25 19:19:32 np0005535963 kernel: ata1: found unknown device (class 0)
Nov 25 19:19:32 np0005535963 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 19:19:32 np0005535963 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 19:19:32 np0005535963 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 19:19:32 np0005535963 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 19:19:32 np0005535963 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 19:19:32 np0005535963 systemd[1]: Finished dracut initqueue hook.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 19:19:32 np0005535963 systemd[1]: Reached target Remote File Systems.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting dracut pre-mount hook...
Nov 25 19:19:32 np0005535963 systemd[1]: Finished dracut pre-mount hook.
Nov 25 19:19:32 np0005535963 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 25 19:19:32 np0005535963 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 19:19:32 np0005535963 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 19:19:32 np0005535963 systemd[1]: Mounting /sysroot...
Nov 25 19:19:33 np0005535963 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 19:19:33 np0005535963 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 25 19:19:33 np0005535963 kernel: XFS (vda1): Ending clean mount
Nov 25 19:19:33 np0005535963 systemd[1]: Mounted /sysroot.
Nov 25 19:19:33 np0005535963 systemd[1]: Reached target Initrd Root File System.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 19:19:33 np0005535963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 19:19:33 np0005535963 systemd[1]: Reached target Initrd File Systems.
Nov 25 19:19:33 np0005535963 systemd[1]: Reached target Initrd Default Target.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting dracut mount hook...
Nov 25 19:19:33 np0005535963 systemd[1]: Finished dracut mount hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 19:19:33 np0005535963 rpc.idmapd[450]: exiting on signal 15
Nov 25 19:19:33 np0005535963 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Network.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Timer Units.
Nov 25 19:19:33 np0005535963 systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Initrd Default Target.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Basic System.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Initrd Root Device.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Initrd /usr File System.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Path Units.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Remote File Systems.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Slice Units.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Socket Units.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target System Initialization.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Local File Systems.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Swaps.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut mount hook.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut pre-mount hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut initqueue hook.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Setup Virtual Console.
Nov 25 19:19:33 np0005535963 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Closed udev Control Socket.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Closed udev Kernel Socket.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut pre-udev hook.
Nov 25 19:19:33 np0005535963 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped dracut cmdline hook.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting Cleanup udev Database...
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 19:19:33 np0005535963 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 19:19:33 np0005535963 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Stopped Create System Users.
Nov 25 19:19:33 np0005535963 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 19:19:33 np0005535963 systemd[1]: Finished Cleanup udev Database.
Nov 25 19:19:33 np0005535963 systemd[1]: Reached target Switch Root.
Nov 25 19:19:33 np0005535963 systemd[1]: Starting Switch Root...
Nov 25 19:19:33 np0005535963 systemd[1]: Switching root.
Nov 25 19:19:33 np0005535963 systemd-journald[306]: Journal stopped
Nov 25 19:19:34 np0005535963 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 25 19:19:34 np0005535963 kernel: audit: type=1404 audit(1764116373.954:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:19:34 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:19:34 np0005535963 kernel: audit: type=1403 audit(1764116374.141:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 19:19:34 np0005535963 systemd: Successfully loaded SELinux policy in 196.272ms.
Nov 25 19:19:34 np0005535963 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 39.191ms.
Nov 25 19:19:34 np0005535963 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 19:19:34 np0005535963 systemd: Detected virtualization kvm.
Nov 25 19:19:34 np0005535963 systemd: Detected architecture x86-64.
Nov 25 19:19:34 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:19:34 np0005535963 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd: Stopped Switch Root.
Nov 25 19:19:34 np0005535963 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 19:19:34 np0005535963 systemd: Created slice Slice /system/getty.
Nov 25 19:19:34 np0005535963 systemd: Created slice Slice /system/serial-getty.
Nov 25 19:19:34 np0005535963 systemd: Created slice Slice /system/sshd-keygen.
Nov 25 19:19:34 np0005535963 systemd: Created slice User and Session Slice.
Nov 25 19:19:34 np0005535963 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 19:19:34 np0005535963 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 25 19:19:34 np0005535963 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 19:19:34 np0005535963 systemd: Reached target Local Encrypted Volumes.
Nov 25 19:19:34 np0005535963 systemd: Stopped target Switch Root.
Nov 25 19:19:34 np0005535963 systemd: Stopped target Initrd File Systems.
Nov 25 19:19:34 np0005535963 systemd: Stopped target Initrd Root File System.
Nov 25 19:19:34 np0005535963 systemd: Reached target Local Integrity Protected Volumes.
Nov 25 19:19:34 np0005535963 systemd: Reached target Path Units.
Nov 25 19:19:34 np0005535963 systemd: Reached target rpc_pipefs.target.
Nov 25 19:19:34 np0005535963 systemd: Reached target Slice Units.
Nov 25 19:19:34 np0005535963 systemd: Reached target Swaps.
Nov 25 19:19:34 np0005535963 systemd: Reached target Local Verity Protected Volumes.
Nov 25 19:19:34 np0005535963 systemd: Listening on RPCbind Server Activation Socket.
Nov 25 19:19:34 np0005535963 systemd: Reached target RPC Port Mapper.
Nov 25 19:19:34 np0005535963 systemd: Listening on Process Core Dump Socket.
Nov 25 19:19:34 np0005535963 systemd: Listening on initctl Compatibility Named Pipe.
Nov 25 19:19:34 np0005535963 systemd: Listening on udev Control Socket.
Nov 25 19:19:34 np0005535963 systemd: Listening on udev Kernel Socket.
Nov 25 19:19:34 np0005535963 systemd: Mounting Huge Pages File System...
Nov 25 19:19:34 np0005535963 systemd: Mounting POSIX Message Queue File System...
Nov 25 19:19:34 np0005535963 systemd: Mounting Kernel Debug File System...
Nov 25 19:19:34 np0005535963 systemd: Mounting Kernel Trace File System...
Nov 25 19:19:34 np0005535963 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 19:19:34 np0005535963 systemd: Starting Create List of Static Device Nodes...
Nov 25 19:19:34 np0005535963 systemd: Starting Load Kernel Module configfs...
Nov 25 19:19:34 np0005535963 systemd: Starting Load Kernel Module drm...
Nov 25 19:19:34 np0005535963 systemd: Starting Load Kernel Module efi_pstore...
Nov 25 19:19:34 np0005535963 systemd: Starting Load Kernel Module fuse...
Nov 25 19:19:34 np0005535963 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 19:19:34 np0005535963 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd: Stopped File System Check on Root Device.
Nov 25 19:19:34 np0005535963 systemd: Stopped Journal Service.
Nov 25 19:19:34 np0005535963 systemd: Starting Journal Service...
Nov 25 19:19:34 np0005535963 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 19:19:34 np0005535963 systemd: Starting Generate network units from Kernel command line...
Nov 25 19:19:34 np0005535963 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:19:34 np0005535963 systemd: Starting Remount Root and Kernel File Systems...
Nov 25 19:19:34 np0005535963 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 19:19:34 np0005535963 systemd: Starting Apply Kernel Variables...
Nov 25 19:19:34 np0005535963 kernel: fuse: init (API version 7.37)
Nov 25 19:19:34 np0005535963 systemd: Starting Coldplug All udev Devices...
Nov 25 19:19:34 np0005535963 systemd: Mounted Huge Pages File System.
Nov 25 19:19:34 np0005535963 systemd: Mounted POSIX Message Queue File System.
Nov 25 19:19:34 np0005535963 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 19:19:34 np0005535963 systemd: Mounted Kernel Debug File System.
Nov 25 19:19:34 np0005535963 systemd: Mounted Kernel Trace File System.
Nov 25 19:19:34 np0005535963 systemd-journald[680]: Journal started
Nov 25 19:19:34 np0005535963 systemd-journald[680]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:19:34 np0005535963 systemd[1]: Queued start job for default target Multi-User System.
Nov 25 19:19:34 np0005535963 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd: Started Journal Service.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 19:19:34 np0005535963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:19:34 np0005535963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 19:19:34 np0005535963 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Load Kernel Module fuse.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Apply Kernel Variables.
Nov 25 19:19:34 np0005535963 kernel: ACPI: bus type drm_connector registered
Nov 25 19:19:34 np0005535963 systemd[1]: Mounting FUSE Control File System...
Nov 25 19:19:34 np0005535963 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 19:19:34 np0005535963 systemd[1]: Starting Rebuild Hardware Database...
Nov 25 19:19:34 np0005535963 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 19:19:34 np0005535963 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 19:19:34 np0005535963 systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 19:19:34 np0005535963 systemd[1]: Starting Create System Users...
Nov 25 19:19:34 np0005535963 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Load Kernel Module drm.
Nov 25 19:19:34 np0005535963 systemd[1]: Mounted FUSE Control File System.
Nov 25 19:19:34 np0005535963 systemd-journald[680]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 19:19:34 np0005535963 systemd-journald[680]: Received client request to flush runtime journal.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 19:19:34 np0005535963 systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 19:19:34 np0005535963 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Create System Users.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target Local File Systems.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 19:19:35 np0005535963 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 19:19:35 np0005535963 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 19:19:35 np0005535963 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 19:19:35 np0005535963 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 19:19:35 np0005535963 bootctl[699]: Couldn't find EFI system partition, skipping.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Security Auditing Service...
Nov 25 19:19:35 np0005535963 systemd[1]: Starting RPC Bind...
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 19:19:35 np0005535963 auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 19:19:35 np0005535963 auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 19:19:35 np0005535963 systemd[1]: Started RPC Bind.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 19:19:35 np0005535963 augenrules[710]: /sbin/augenrules: No change
Nov 25 19:19:35 np0005535963 augenrules[725]: No rules
Nov 25 19:19:35 np0005535963 augenrules[725]: enabled 1
Nov 25 19:19:35 np0005535963 augenrules[725]: failure 1
Nov 25 19:19:35 np0005535963 augenrules[725]: pid 705
Nov 25 19:19:35 np0005535963 augenrules[725]: rate_limit 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_limit 8192
Nov 25 19:19:35 np0005535963 augenrules[725]: lost 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog 2
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time 60000
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time_actual 0
Nov 25 19:19:35 np0005535963 augenrules[725]: enabled 1
Nov 25 19:19:35 np0005535963 augenrules[725]: failure 1
Nov 25 19:19:35 np0005535963 augenrules[725]: pid 705
Nov 25 19:19:35 np0005535963 augenrules[725]: rate_limit 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_limit 8192
Nov 25 19:19:35 np0005535963 augenrules[725]: lost 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog 1
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time 60000
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time_actual 0
Nov 25 19:19:35 np0005535963 augenrules[725]: enabled 1
Nov 25 19:19:35 np0005535963 augenrules[725]: failure 1
Nov 25 19:19:35 np0005535963 augenrules[725]: pid 705
Nov 25 19:19:35 np0005535963 augenrules[725]: rate_limit 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_limit 8192
Nov 25 19:19:35 np0005535963 augenrules[725]: lost 0
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog 4
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time 60000
Nov 25 19:19:35 np0005535963 augenrules[725]: backlog_wait_time_actual 0
Nov 25 19:19:35 np0005535963 systemd[1]: Started Security Auditing Service.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Rebuild Hardware Database.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Update is Completed...
Nov 25 19:19:35 np0005535963 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Update is Completed.
Nov 25 19:19:35 np0005535963 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target System Initialization.
Nov 25 19:19:35 np0005535963 systemd[1]: Started dnf makecache --timer.
Nov 25 19:19:35 np0005535963 systemd[1]: Started Daily rotation of log files.
Nov 25 19:19:35 np0005535963 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target Timer Units.
Nov 25 19:19:35 np0005535963 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 19:19:35 np0005535963 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target Socket Units.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting D-Bus System Message Bus...
Nov 25 19:19:35 np0005535963 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:19:35 np0005535963 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 19:19:35 np0005535963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 19:19:35 np0005535963 systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:19:35 np0005535963 systemd[1]: Started D-Bus System Message Bus.
Nov 25 19:19:35 np0005535963 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 19:19:35 np0005535963 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 19:19:35 np0005535963 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 19:19:35 np0005535963 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target Basic System.
Nov 25 19:19:35 np0005535963 dbus-broker-lau[762]: Ready
Nov 25 19:19:35 np0005535963 systemd[1]: Starting NTP client/server...
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 19:19:35 np0005535963 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 19:19:35 np0005535963 systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 19:19:35 np0005535963 systemd[1]: Started irqbalance daemon.
Nov 25 19:19:35 np0005535963 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 19:19:35 np0005535963 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:19:35 np0005535963 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:19:35 np0005535963 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target sshd-keygen.target.
Nov 25 19:19:35 np0005535963 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 19:19:35 np0005535963 systemd[1]: Reached target User and Group Name Lookups.
Nov 25 19:19:35 np0005535963 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 19:19:35 np0005535963 chronyd[795]: Loaded 0 symmetric keys
Nov 25 19:19:35 np0005535963 chronyd[795]: Using right/UTC timezone to obtain leap second data
Nov 25 19:19:35 np0005535963 chronyd[795]: Loaded seccomp filter (level 2)
Nov 25 19:19:35 np0005535963 systemd[1]: Starting User Login Management...
Nov 25 19:19:35 np0005535963 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 19:19:35 np0005535963 systemd[1]: Started NTP client/server.
Nov 25 19:19:35 np0005535963 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 19:19:35 np0005535963 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 19:19:35 np0005535963 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 19:19:35 np0005535963 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 19:19:35 np0005535963 kernel: Console: switching to colour dummy device 80x25
Nov 25 19:19:35 np0005535963 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 19:19:35 np0005535963 kernel: [drm] features: -context_init
Nov 25 19:19:35 np0005535963 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 19:19:35 np0005535963 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 19:19:35 np0005535963 systemd-logind[800]: New seat seat0.
Nov 25 19:19:35 np0005535963 systemd[1]: Started User Login Management.
Nov 25 19:19:35 np0005535963 kernel: [drm] number of scanouts: 1
Nov 25 19:19:35 np0005535963 kernel: [drm] number of cap sets: 0
Nov 25 19:19:36 np0005535963 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 19:19:36 np0005535963 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 19:19:36 np0005535963 kernel: Console: switching to colour frame buffer device 128x48
Nov 25 19:19:36 np0005535963 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 19:19:36 np0005535963 kernel: kvm_amd: TSC scaling supported
Nov 25 19:19:36 np0005535963 kernel: kvm_amd: Nested Virtualization enabled
Nov 25 19:19:36 np0005535963 kernel: kvm_amd: Nested Paging enabled
Nov 25 19:19:36 np0005535963 kernel: kvm_amd: LBR virtualization supported
Nov 25 19:19:36 np0005535963 iptables.init[782]: iptables: Applying firewall rules: [  OK  ]
Nov 25 19:19:36 np0005535963 systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 19:19:36 np0005535963 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 26 Nov 2025 00:19:36 +0000. Up 7.14 seconds.
Nov 25 19:19:36 np0005535963 systemd[1]: run-cloud\x2dinit-tmp-tmpqtp318a_.mount: Deactivated successfully.
Nov 25 19:19:36 np0005535963 systemd[1]: Starting Hostname Service...
Nov 25 19:19:36 np0005535963 systemd[1]: Started Hostname Service.
Nov 25 19:19:36 np0005535963 systemd-hostnamed[855]: Hostname set to <np0005535963.novalocal> (static)
Nov 25 19:19:37 np0005535963 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 19:19:37 np0005535963 systemd[1]: Reached target Preparation for Network.
Nov 25 19:19:37 np0005535963 systemd[1]: Starting Network Manager...
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1755] NetworkManager (version 1.54.1-1.el9) is starting... (boot:3ecb6427-f9e8-4e80-8be1-c37e67edd798)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1762] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1905] manager[0x55b9c790b080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1966] hostname: hostname: using hostnamed
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1967] hostname: static hostname changed from (none) to "np0005535963.novalocal"
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.1971] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2169] manager[0x55b9c790b080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2170] manager[0x55b9c790b080]: rfkill: WWAN hardware radio set enabled
Nov 25 19:19:37 np0005535963 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2342] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2342] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2343] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2345] manager: Networking is enabled by state file
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2350] settings: Loaded settings plugin: keyfile (internal)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2405] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2444] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2492] dhcp: init: Using DHCP client 'internal'
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2497] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2520] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2539] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2553] device (lo): Activation: starting connection 'lo' (ab48b975-dcbe-41f6-95b4-36e306e95236)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2570] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2576] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:19:37 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2621] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2629] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2633] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2636] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2641] device (eth0): carrier: link connected
Nov 25 19:19:37 np0005535963 systemd[1]: Started Network Manager.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2646] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2659] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2670] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:19:37 np0005535963 systemd[1]: Reached target Network.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2677] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2679] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2683] manager: NetworkManager state is now CONNECTING
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2686] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2702] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2706] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:19:37 np0005535963 systemd[1]: Starting Network Manager Wait Online...
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2780] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2786] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2802] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 19:19:37 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2909] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2911] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2917] device (lo): Activation: successful, device activated.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2923] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2925] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2928] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2931] device (eth0): Activation: successful, device activated.
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2936] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 19:19:37 np0005535963 NetworkManager[859]: <info>  [1764116377.2939] manager: startup complete
Nov 25 19:19:37 np0005535963 systemd[1]: Finished Network Manager Wait Online.
Nov 25 19:19:37 np0005535963 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 19:19:37 np0005535963 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 19:19:37 np0005535963 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 19:19:37 np0005535963 systemd[1]: Reached target NFS client services.
Nov 25 19:19:37 np0005535963 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 19:19:37 np0005535963 systemd[1]: Reached target Remote File Systems.
Nov 25 19:19:37 np0005535963 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 19:19:37 np0005535963 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 26 Nov 2025 00:19:37 +0000. Up 8.32 seconds.
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.107         | 255.255.255.0 | global | fa:16:3e:dd:6f:62 |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fedd:6f62/64 |       .       |  link  | fa:16:3e:dd:6f:62 |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 19:19:37 np0005535963 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 19:19:39 np0005535963 cloud-init[922]: Generating public/private rsa key pair.
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key fingerprint is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: SHA256:+CLsn8VsiMs9OvghAACz/y7X4xeNNx8VccCze7mUDgo root@np0005535963.novalocal
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key's randomart image is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: +---[RSA 3072]----+
Nov 25 19:19:39 np0005535963 cloud-init[922]: |=            .oo.|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |.o            +. |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |o              + |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |..     .      o  |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |. .   . So   . .o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: | . o . =oE+ ...+.|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |  ..*.o *o.o..+..|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |  o=o=+=.  ..  o |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |   +B==+         |
Nov 25 19:19:39 np0005535963 cloud-init[922]: +----[SHA256]-----+
Nov 25 19:19:39 np0005535963 cloud-init[922]: Generating public/private ecdsa key pair.
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key fingerprint is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: SHA256:TvBL3TQR4F//wPQvy1zf4qUMhBSpy6NVZl56vjKclEk root@np0005535963.novalocal
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key's randomart image is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: +---[ECDSA 256]---+
Nov 25 19:19:39 np0005535963 cloud-init[922]: |          ooo.   |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |         ... .   |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |      .  .o o o  |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       o.oE=.= o |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       .SBo=+ o o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       +=.*..  .o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       oo+ +. . =|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |      .   = .=.*o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |           o.oB.o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: +----[SHA256]-----+
Nov 25 19:19:39 np0005535963 cloud-init[922]: Generating public/private ed25519 key pair.
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 19:19:39 np0005535963 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key fingerprint is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: SHA256:OFlNcf8mQk0TURpsxz1A53V2c5BiM7SzuaiWm1dKfC4 root@np0005535963.novalocal
Nov 25 19:19:39 np0005535963 cloud-init[922]: The key's randomart image is:
Nov 25 19:19:39 np0005535963 cloud-init[922]: +--[ED25519 256]--+
Nov 25 19:19:39 np0005535963 cloud-init[922]: |          oo++OXO|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |         o .==**X|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |        . ..+=+o.|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       +   . + . |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |      + S.  + . o|
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       .  o.oo o |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |         o.=.    |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |        ooE .    |
Nov 25 19:19:39 np0005535963 cloud-init[922]: |       .+o .     |
Nov 25 19:19:39 np0005535963 cloud-init[922]: +----[SHA256]-----+
Nov 25 19:19:39 np0005535963 sm-notify[1005]: Version 2.5.4 starting
Nov 25 19:19:39 np0005535963 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 19:19:39 np0005535963 systemd[1]: Reached target Cloud-config availability.
Nov 25 19:19:39 np0005535963 systemd[1]: Reached target Network is Online.
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 19:19:39 np0005535963 systemd[1]: Starting System Logging Service...
Nov 25 19:19:39 np0005535963 systemd[1]: Starting OpenSSH server daemon...
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Permit User Sessions...
Nov 25 19:19:39 np0005535963 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 19:19:39 np0005535963 systemd[1]: Started OpenSSH server daemon.
Nov 25 19:19:39 np0005535963 systemd[1]: Finished Permit User Sessions.
Nov 25 19:19:39 np0005535963 systemd[1]: Started Command Scheduler.
Nov 25 19:19:39 np0005535963 systemd[1]: Started Getty on tty1.
Nov 25 19:19:39 np0005535963 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 19:19:39 np0005535963 systemd[1]: Reached target Login Prompts.
Nov 25 19:19:39 np0005535963 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 25 19:19:39 np0005535963 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 19:19:39 np0005535963 systemd[1]: Started System Logging Service.
Nov 25 19:19:39 np0005535963 systemd[1]: Reached target Multi-User System.
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 19:19:39 np0005535963 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 19:19:39 np0005535963 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 19:19:39 np0005535963 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:19:39 np0005535963 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Nov 25 19:19:39 np0005535963 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 19:19:39 np0005535963 cloud-init[1072]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 26 Nov 2025 00:19:39 +0000. Up 10.17 seconds.
Nov 25 19:19:39 np0005535963 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 19:19:39 np0005535963 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 19:19:39 np0005535963 cloud-init[1215]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 26 Nov 2025 00:19:39 +0000. Up 10.59 seconds.
Nov 25 19:19:39 np0005535963 cloud-init[1229]: #############################################################
Nov 25 19:19:40 np0005535963 cloud-init[1231]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 19:19:40 np0005535963 cloud-init[1236]: 256 SHA256:TvBL3TQR4F//wPQvy1zf4qUMhBSpy6NVZl56vjKclEk root@np0005535963.novalocal (ECDSA)
Nov 25 19:19:40 np0005535963 cloud-init[1243]: 256 SHA256:OFlNcf8mQk0TURpsxz1A53V2c5BiM7SzuaiWm1dKfC4 root@np0005535963.novalocal (ED25519)
Nov 25 19:19:40 np0005535963 cloud-init[1249]: 3072 SHA256:+CLsn8VsiMs9OvghAACz/y7X4xeNNx8VccCze7mUDgo root@np0005535963.novalocal (RSA)
Nov 25 19:19:40 np0005535963 cloud-init[1251]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 19:19:40 np0005535963 cloud-init[1254]: #############################################################
Nov 25 19:19:40 np0005535963 cloud-init[1215]: Cloud-init v. 24.4-7.el9 finished at Wed, 26 Nov 2025 00:19:40 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.81 seconds
Nov 25 19:19:40 np0005535963 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 19:19:40 np0005535963 systemd[1]: Reached target Cloud-init target.
Nov 25 19:19:40 np0005535963 dracut[1283]: dracut-057-102.git20250818.el9
Nov 25 19:19:40 np0005535963 dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 19:19:41 np0005535963 dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: memstrack is not available
Nov 25 19:19:42 np0005535963 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 19:19:42 np0005535963 chronyd[795]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Nov 25 19:19:42 np0005535963 chronyd[795]: System clock TAI offset set to 37 seconds
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 19:19:42 np0005535963 dracut[1285]: memstrack is not available
Nov 25 19:19:42 np0005535963 dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 19:19:42 np0005535963 dracut[1285]: *** Including module: systemd ***
Nov 25 19:19:43 np0005535963 dracut[1285]: *** Including module: fips ***
Nov 25 19:19:43 np0005535963 dracut[1285]: *** Including module: systemd-initrd ***
Nov 25 19:19:43 np0005535963 dracut[1285]: *** Including module: i18n ***
Nov 25 19:19:43 np0005535963 dracut[1285]: *** Including module: drm ***
Nov 25 19:19:44 np0005535963 dracut[1285]: *** Including module: prefixdevname ***
Nov 25 19:19:44 np0005535963 dracut[1285]: *** Including module: kernel-modules ***
Nov 25 19:19:44 np0005535963 kernel: block vda: the capability attribute has been deprecated.
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: kernel-modules-extra ***
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: qemu ***
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: fstab-sys ***
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: rootfs-block ***
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: terminfo ***
Nov 25 19:19:45 np0005535963 dracut[1285]: *** Including module: udev-rules ***
Nov 25 19:19:46 np0005535963 dracut[1285]: Skipping udev rule: 91-permissions.rules
Nov 25 19:19:46 np0005535963 dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 25 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 31 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 28 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 32 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 30 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 19:19:46 np0005535963 irqbalance[791]: IRQ 29 affinity is now unmanaged
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: virtiofs ***
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: dracut-systemd ***
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: usrmount ***
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: base ***
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: fs-lib ***
Nov 25 19:19:46 np0005535963 dracut[1285]: *** Including module: kdumpbase ***
Nov 25 19:19:47 np0005535963 dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 19:19:47 np0005535963 dracut[1285]:  microcode_ctl module: mangling fw_dir
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 19:19:47 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 19:19:47 np0005535963 dracut[1285]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 19:19:48 np0005535963 dracut[1285]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 19:19:48 np0005535963 dracut[1285]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 19:19:48 np0005535963 dracut[1285]: *** Including module: openssl ***
Nov 25 19:19:48 np0005535963 dracut[1285]: *** Including module: shutdown ***
Nov 25 19:19:48 np0005535963 dracut[1285]: *** Including module: squash ***
Nov 25 19:19:48 np0005535963 dracut[1285]: *** Including modules done ***
Nov 25 19:19:48 np0005535963 dracut[1285]: *** Installing kernel module dependencies ***
Nov 25 19:19:49 np0005535963 dracut[1285]: *** Installing kernel module dependencies done ***
Nov 25 19:19:49 np0005535963 dracut[1285]: *** Resolving executable dependencies ***
Nov 25 19:19:50 np0005535963 dracut[1285]: *** Resolving executable dependencies done ***
Nov 25 19:19:50 np0005535963 dracut[1285]: *** Generating early-microcode cpio image ***
Nov 25 19:19:50 np0005535963 dracut[1285]: *** Store current command line parameters ***
Nov 25 19:19:50 np0005535963 dracut[1285]: Stored kernel commandline:
Nov 25 19:19:50 np0005535963 dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Nov 25 19:19:51 np0005535963 dracut[1285]: *** Install squash loader ***
Nov 25 19:19:52 np0005535963 dracut[1285]: *** Squashing the files inside the initramfs ***
Nov 25 19:19:53 np0005535963 dracut[1285]: *** Squashing the files inside the initramfs done ***
Nov 25 19:19:53 np0005535963 dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 19:19:53 np0005535963 dracut[1285]: *** Hardlinking files ***
Nov 25 19:19:53 np0005535963 dracut[1285]: *** Hardlinking files done ***
Nov 25 19:19:53 np0005535963 dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 19:19:54 np0005535963 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Nov 25 19:19:54 np0005535963 kdumpctl[1019]: kdump: Starting kdump: [OK]
Nov 25 19:19:54 np0005535963 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 19:19:54 np0005535963 systemd[1]: Startup finished in 1.751s (kernel) + 2.883s (initrd) + 20.331s (userspace) = 24.967s.
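The dracut run above (pid 1285) assembled a dedicated kdump initramfs, /boot/initramfs-5.14.0-642.el9.x86_64kdump.img, and kdumpctl then kexec-loaded it into the crashkernel= region reserved on the kernel command line. A minimal sketch of how to inspect or reproduce that state by hand with the standard RHEL 9 kdump tooling (exact output wording varies):

    kdumpctl status                      # reports whether kdump is operational
    cat /sys/kernel/kexec_crash_loaded   # 1 once the panic kernel is loaded
    kdumpctl rebuild                     # regenerate the kdump initramfs, as dracut did above
    systemctl restart kdump.service      # re-arm, equivalent to the kdumpctl[1019] run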
Nov 25 19:19:56 np0005535963 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 19:19:56 np0005535963 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 19:19:56 np0005535963 systemd-logind[800]: New session 1 of user zuul.
Nov 25 19:19:56 np0005535963 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 19:19:56 np0005535963 systemd[1]: Starting User Manager for UID 1000...
Nov 25 19:19:56 np0005535963 systemd[4300]: Queued start job for default target Main User Target.
Nov 25 19:19:56 np0005535963 systemd[4300]: Created slice User Application Slice.
Nov 25 19:19:56 np0005535963 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 19:19:56 np0005535963 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 19:19:56 np0005535963 systemd[4300]: Reached target Paths.
Nov 25 19:19:56 np0005535963 systemd[4300]: Reached target Timers.
Nov 25 19:19:56 np0005535963 systemd[4300]: Starting D-Bus User Message Bus Socket...
Nov 25 19:19:56 np0005535963 systemd[4300]: Starting Create User's Volatile Files and Directories...
Nov 25 19:19:56 np0005535963 systemd[4300]: Finished Create User's Volatile Files and Directories.
Nov 25 19:19:56 np0005535963 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Nov 25 19:19:56 np0005535963 systemd[4300]: Reached target Sockets.
Nov 25 19:19:56 np0005535963 systemd[4300]: Reached target Basic System.
Nov 25 19:19:56 np0005535963 systemd[4300]: Reached target Main User Target.
Nov 25 19:19:56 np0005535963 systemd[4300]: Startup finished in 167ms.
Nov 25 19:19:56 np0005535963 systemd[1]: Started User Manager for UID 1000.
Nov 25 19:19:56 np0005535963 systemd[1]: Started Session 1 of User zuul.
Nov 25 19:19:56 np0005535963 python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:19:59 np0005535963 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:20:05 np0005535963 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:20:06 np0005535963 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 19:20:07 np0005535963 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:20:08 np0005535963 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0u14Q4/CsnLwIecJgpWvPEkKn7ssiuCUPRU4c1/zc3B7XwnQOBNB67YgdXqpzUMRPpAb2K9clHIRVXUv+6Sa05iWAU1RWStDS5Pa0fa5GUlUDzHX/UA4ZlFGEfPFqp7WRXWUbxQTTdMwd6ebb1WF8sM+oTDa6hsjkJ8IxIuretrbwxO3ccE4OFlQ6nSD3lkd6TKX/Did2sghUYrKUJc+ov3tVacjQXAKxVXeMY9sRaLWL9KfgiAzVpiqQVu+IjridKtvLYvXGaRX1OK/MIDJYGSu+Puh2c4ENTaw1BENsQIP6hSq4GwiPxgH96GeHg8p1jIbLUdM1I3tOmEkInz3wmPJcTixDCZ4dAzjmTMiasIGBN8cX4OsgFIkQDLLcfG2eAmgn/GYOEkqbYczpF2+9nHD8zy6i6sKq7MCh2xNNUovi1BuQfONlje5u1L7SvznhrTkrds3HDcGJWT0uqoJbtSUqoq4HCmpcZq5o6OKrx2FMs87VMQpKfZFcO8ECvr8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:09 np0005535963 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:09 np0005535963 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:10 np0005535963 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764116409.2202952-207-40585969282379/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cb84fc78280d4c5abb3b2a7b34c51088_id_rsa follow=False checksum=08a5bcec18918f1da14de13ebedb2d253caebd67 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:10 np0005535963 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:11 np0005535963 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764116410.2951093-240-32854813966295/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cb84fc78280d4c5abb3b2a7b34c51088_id_rsa.pub follow=False checksum=c42afda5c04f6f43a7169d4dfc0dc90f4c3a2852 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
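Worth noting when reading these ansible-* lines: the file and copy modules log octal file modes in decimal, so mode=448 on the .ssh directory is 0700, mode=384 on id_rsa is 0600, and mode=420 on id_rsa.pub is 0644. A shell sketch of the same build-key installation (the local source filenames here are placeholders; the real ones live under the ansible tmp paths shown above):

    install -d -m 0700 -o zuul -g zuul /home/zuul/.ssh           # mode=448 == 0o700
    install -m 0600 build_id_rsa     /home/zuul/.ssh/id_rsa      # mode=384 == 0o600
    install -m 0644 build_id_rsa.pub /home/zuul/.ssh/id_rsa.pub  # mode=420 == 0o644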
Nov 25 19:20:12 np0005535963 python3[4972]: ansible-ping Invoked with data=pong
Nov 25 19:20:13 np0005535963 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:20:15 np0005535963 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 19:20:16 np0005535963 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:16 np0005535963 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:16 np0005535963 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:16 np0005535963 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:17 np0005535963 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:17 np0005535963 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:19 np0005535963 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:19 np0005535963 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:20 np0005535963 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764116419.2533956-21-169563622509488/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:20 np0005535963 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:21 np0005535963 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:21 np0005535963 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:21 np0005535963 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:22 np0005535963 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:22 np0005535963 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:22 np0005535963 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:22 np0005535963 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:23 np0005535963 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:23 np0005535963 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:23 np0005535963 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:24 np0005535963 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:24 np0005535963 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:24 np0005535963 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:25 np0005535963 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:25 np0005535963 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:25 np0005535963 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:25 np0005535963 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:26 np0005535963 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:26 np0005535963 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:26 np0005535963 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:26 np0005535963 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:27 np0005535963 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:27 np0005535963 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:27 np0005535963 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:20:28 np0005535963 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
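The burst of ansible-authorized_key tasks from 19:20:08 to 19:20:28 installs one developer public key per task into the zuul user's authorized_keys; the module is idempotent, appending a key only when it is not already present. A rough shell equivalent of that append-if-missing behavior (the key value is illustrative):

    key='ssh-ed25519 AAAA... someone@example.com'
    auth=/home/zuul/.ssh/authorized_keys
    grep -qxF "$key" "$auth" 2>/dev/null || echo "$key" >> "$auth"
    chmod 0600 "$auth"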
Nov 25 19:20:30 np0005535963 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 19:20:30 np0005535963 systemd[1]: Starting Time & Date Service...
Nov 25 19:20:30 np0005535963 systemd[1]: Started Time & Date Service.
Nov 25 19:20:30 np0005535963 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
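The community.general.timezone task talks to systemd-timedated over D-Bus, which is why the Time & Date Service starts on demand here, applies the change, and idles out again at 19:21:00. The direct CLI equivalent:

    timedatectl set-timezone UTC
    timedatectl show -p Timezone   # verify: prints Timezone=UTC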
Nov 25 19:20:32 np0005535963 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:32 np0005535963 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:32 np0005535963 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764116432.2386885-153-147985713137287/source _original_basename=tmpopav5qo6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:33 np0005535963 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:33 np0005535963 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764116433.2247858-183-211110785541361/source _original_basename=tmpq9mm4lvi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:34 np0005535963 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:35 np0005535963 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764116434.3400495-231-216776804178552/source _original_basename=tmp6x47kb5s follow=False checksum=9002ae785196258bce68f82c9276ee1756ef1744 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:35 np0005535963 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:20:35 np0005535963 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:20:36 np0005535963 irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 19:20:36 np0005535963 irqbalance[791]: IRQ 27 affinity is now unmanaged
Nov 25 19:20:36 np0005535963 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:20:36 np0005535963 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764116436.0460825-273-84021235801761/source _original_basename=tmpi937jkki follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:37 np0005535963 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-c9b0-cb5a-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
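The sudoers fragment is installed with mode=288 (0440, the conventional sudoers mode) and the playbook then runs /usr/sbin/visudo -c to syntax-check the combined configuration. A sketch of the same check, plus the per-file variant that validates only the new fragment:

    visudo -cf /etc/sudoers.d/zuul-sudo-grep   # check just this fragment
    visudo -c                                  # check the full sudoers config, as logged above

The copy task could equally have passed validate='/usr/sbin/visudo -cf %s', so a syntactically broken fragment would never be moved into place.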
Nov 25 19:20:38 np0005535963 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-c9b0-cb5a-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 19:20:39 np0005535963 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:20:56 np0005535963 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:21:00 np0005535963 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 19:21:32 np0005535963 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 19:21:32 np0005535963 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5246] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 19:21:32 np0005535963 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5433] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5469] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5476] device (eth1): carrier: link connected
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5480] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5490] policy: auto-activating connection 'Wired connection 1' (7629e937-1c96-30b6-ac82-df5a4daf1292)
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5497] device (eth1): Activation: starting connection 'Wired connection 1' (7629e937-1c96-30b6-ac82-df5a4daf1292)
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5499] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5506] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5513] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:21:32 np0005535963 NetworkManager[859]: <info>  [1764116492.5520] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:21:33 np0005535963 python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-762d-2d73-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
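ip -j link prints the link table as JSON, letting the calling playbook parse interface facts instead of scraping column output. A quick sketch of extracting names and operational states (assuming jq is available on the node):

    ip -j link | jq -r '.[] | "\(.ifname) \(.operstate)"'
    # e.g. lo UNKNOWN / eth0 UP / eth1 UP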
Nov 25 19:21:43 np0005535963 python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:21:44 np0005535963 python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764116503.2555795-102-157920341672275/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e7f354026ce3727f1b6a273c5288f814e4a378b4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
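The rendered ci-private-network.nmconnection is a NetworkManager keyfile; its contents are not logged, but for eth1 with static addressing it would look roughly like the sketch below (all values illustrative, not taken from this log). NetworkManager only loads keyfiles that are root-owned with mode 0600, which matches the mode=0600 owner=root in the copy above.

    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.0.2.10/24

    [ipv6]
    method=ignore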
Nov 25 19:21:44 np0005535963 python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:21:44 np0005535963 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 19:21:44 np0005535963 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 19:21:44 np0005535963 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 19:21:44 np0005535963 systemd[1]: Stopping Network Manager...
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9391] caught SIGTERM, shutting down normally.
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9401] dhcp4 (eth0): canceled DHCP transaction
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9401] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9402] dhcp4 (eth0): state changed no lease
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9405] manager: NetworkManager state is now CONNECTING
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9619] dhcp4 (eth1): canceled DHCP transaction
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9619] dhcp4 (eth1): state changed no lease
Nov 25 19:21:44 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:21:44 np0005535963 NetworkManager[859]: <info>  [1764116504.9674] exiting (success)
Nov 25 19:21:44 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:21:44 np0005535963 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 19:21:44 np0005535963 systemd[1]: Stopped Network Manager.
Nov 25 19:21:44 np0005535963 systemd[1]: Starting Network Manager...
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0005] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3ecb6427-f9e8-4e80-8be1-c37e67edd798)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0007] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0053] manager[0x55f6baba9070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 19:21:45 np0005535963 systemd[1]: Starting Hostname Service...
Nov 25 19:21:45 np0005535963 systemd[1]: Started Hostname Service.
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0823] hostname: hostname: using hostnamed
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0825] hostname: static hostname changed from (none) to "np0005535963.novalocal"
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0829] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0832] manager[0x55f6baba9070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0832] manager[0x55f6baba9070]: rfkill: WWAN hardware radio set enabled
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0854] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0854] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0855] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0855] manager: Networking is enabled by state file
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0857] settings: Loaded settings plugin: keyfile (internal)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0860] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0879] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0886] dhcp: init: Using DHCP client 'internal'
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0888] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0892] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0897] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0902] device (lo): Activation: starting connection 'lo' (ab48b975-dcbe-41f6-95b4-36e306e95236)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0906] device (eth0): carrier: link connected
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0909] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0912] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0913] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0917] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0921] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0925] device (eth1): carrier: link connected
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0928] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0931] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (7629e937-1c96-30b6-ac82-df5a4daf1292) (indicated)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0931] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0934] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0939] device (eth1): Activation: starting connection 'Wired connection 1' (7629e937-1c96-30b6-ac82-df5a4daf1292)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0943] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 19:21:45 np0005535963 systemd[1]: Started Network Manager.
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0946] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0947] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0948] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0950] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0952] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0953] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0954] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0956] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0960] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0961] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0980] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.0984] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1012] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1016] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1020] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1027] device (lo): Activation: successful, device activated.
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1042] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1132] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 systemd[1]: Starting Network Manager Wait Online...
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1186] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1190] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1196] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1203] device (eth0): Activation: successful, device activated.
Nov 25 19:21:45 np0005535963 NetworkManager[7182]: <info>  [1764116505.1211] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 19:21:45 np0005535963 python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-762d-2d73-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:21:55 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:22:15 np0005535963 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:22:23 np0005535963 systemd[4300]: Starting Mark boot as successful...
Nov 25 19:22:23 np0005535963 systemd[4300]: Finished Mark boot as successful.
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3102] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:22:30 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:22:30 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3442] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3447] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3460] device (eth1): Activation: successful, device activated.
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3470] manager: startup complete
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3475] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <warn>  [1764116550.3484] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3496] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 systemd[1]: Finished Network Manager Wait Online.
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3659] dhcp4 (eth1): canceled DHCP transaction
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3659] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3660] dhcp4 (eth1): state changed no lease
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3679] policy: auto-activating connection 'ci-private-network' (cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3)
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3684] device (eth1): Activation: starting connection 'ci-private-network' (cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3)
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3685] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3687] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3694] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3703] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3754] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3755] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:22:30 np0005535963 NetworkManager[7182]: <info>  [1764116550.3761] device (eth1): Activation: successful, device activated.
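This sequence shows why the profile swap works: the autoconnect 'Wired connection 1' profile sat in ip-config until its 45 s DHCP transaction expired with no lease, failed with reason 'ip-config-unavailable', and NetworkManager then auto-activated the newly installed 'ci-private-network' profile, which completes in milliseconds with no dhcp4 transaction logged, consistent with static addressing. To verify from the shell:

    nmcli -g GENERAL.STATE device show eth1        # expect: 100 (connected)
    nmcli -g GENERAL.CONNECTION device show eth1   # expect: ci-private-network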
Nov 25 19:22:40 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:22:41 np0005535963 python3[7364]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:22:42 np0005535963 python3[7437]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764116561.405152-267-2286150751213/source _original_basename=tmpiksbtddn follow=False checksum=b974985eff14f380a6ae213569c8c1aad5ad477f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:23:42 np0005535963 systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Nov 25 19:25:23 np0005535963 systemd[4300]: Created slice User Background Tasks Slice.
Nov 25 19:25:23 np0005535963 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 19:25:23 np0005535963 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 19:28:20 np0005535963 systemd-logind[800]: New session 3 of user zuul.
Nov 25 19:28:20 np0005535963 systemd[1]: Started Session 3 of User zuul.
Nov 25 19:28:20 np0005535963 python3[7497]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-52b0-9e30-000000001cc8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:21 np0005535963 python3[7525]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:21 np0005535963 python3[7552]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:21 np0005535963 python3[7578]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:22 np0005535963 python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:22 np0005535963 python3[7630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:23 np0005535963 python3[7708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:28:23 np0005535963 python3[7781]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764116902.8001585-470-172948075233073/source _original_basename=tmp6x50s_j5 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:28:24 np0005535963 python3[7831]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 19:28:24 np0005535963 systemd[1]: Reloading.
Nov 25 19:28:24 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:28:26 np0005535963 python3[7886]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 19:28:26 np0005535963 python3[7912]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:26 np0005535963 python3[7940]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:27 np0005535963 python3[7968]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:27 np0005535963 python3[7996]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:28 np0005535963 python3[8024]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-52b0-9e30-000000001ccf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:28:28 np0005535963 python3[8054]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
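The four shell tasks above write an identical cgroup-v2 throttle line into io.max for each top-level slice: device 252:0 (likely the virtio root disk; the log does not resolve the major:minor) is capped at 18000 read and 18000 write IOPS and 262144000 B/s, i.e. 250 MiB/s, in each direction. The field layout, as a sketch:

    # io.max line: <major>:<minor> riops=<read IOPS> wiops=<write IOPS> rbps=<read B/s> wbps=<write B/s>
    # 262144000 = 250 * 1024 * 1024, so rbps/wbps cap throughput at 250 MiB/s
    cat /sys/fs/cgroup/system.slice/io.max

The cat task after the writes is the read-back verification, and the stat of /sys/fs/cgroup/kubepods.slice/io.max presumably checks whether a kubelet slice exists before the same limit would be applied there.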
Nov 25 19:28:30 np0005535963 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 19:28:30 np0005535963 systemd[1]: session-3.scope: Consumed 5.086s CPU time.
Nov 25 19:28:30 np0005535963 systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Nov 25 19:28:30 np0005535963 systemd-logind[800]: Removed session 3.
Nov 25 19:28:32 np0005535963 systemd-logind[800]: New session 4 of user zuul.
Nov 25 19:28:32 np0005535963 systemd[1]: Started Session 4 of User zuul.
Nov 25 19:28:32 np0005535963 python3[8088]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 19:28:45 np0005535963 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:28:45 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:28:54 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:29:02 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:29:04 np0005535963 setsebool[8152]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 19:29:04 np0005535963 setsebool[8152]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
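Both booleans are flipped by the same setsebool run (PID 8152), most likely from a package scriptlet in the podman/buildah transaction above rather than a separate Ansible task. The equivalent manual command, assuming the persistent -P flag since the change is clearly meant to outlive the job:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1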
Nov 25 19:29:14 np0005535963 kernel: SELinux:  Converting 388 SID table entries...
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:29:14 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:29:32 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 19:29:33 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:29:33 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:29:33 np0005535963 systemd[1]: Reloading.
Nov 25 19:29:33 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:29:33 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:29:38 np0005535963 python3[12541]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-bd61-9e47-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:29:39 np0005535963 kernel: evm: overlay not supported
Nov 25 19:29:39 np0005535963 systemd[4300]: Starting D-Bus User Message Bus...
Nov 25 19:29:39 np0005535963 dbus-broker-launch[13228]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 19:29:39 np0005535963 dbus-broker-launch[13228]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 19:29:39 np0005535963 systemd[4300]: Started D-Bus User Message Bus.
Nov 25 19:29:39 np0005535963 dbus-broker-launch[13228]: Ready
Nov 25 19:29:39 np0005535963 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 19:29:39 np0005535963 systemd[4300]: Created slice Slice /user.
Nov 25 19:29:39 np0005535963 systemd[4300]: podman-13141.scope: unit configures an IP firewall, but not running as root.
Nov 25 19:29:39 np0005535963 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 19:29:39 np0005535963 systemd[4300]: Started podman-13141.scope.
Nov 25 19:29:39 np0005535963 systemd[4300]: Started podman-pause-e21db56d.scope.
Nov 25 19:29:40 np0005535963 python3[13748]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.248:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.248:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:29:40 np0005535963 python3[13748]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
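Decoding the #012 escapes, the blockinfile task above leaves this marker-delimited TOML block at the end of /etc/containers/registries.conf, telling podman to pull from the CI registry over plain HTTP:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.248:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK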
Nov 25 19:29:40 np0005535963 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 19:29:40 np0005535963 systemd[1]: session-4.scope: Consumed 58.747s CPU time.
Nov 25 19:29:40 np0005535963 systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Nov 25 19:29:40 np0005535963 systemd-logind[800]: Removed session 4.
Nov 25 19:30:04 np0005535963 systemd-logind[800]: New session 5 of user zuul.
Nov 25 19:30:04 np0005535963 systemd[1]: Started Session 5 of User zuul.
Nov 25 19:30:04 np0005535963 python3[21544]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJW4v5GFX9Ow9lzekWl6keqGfssutSPzhiArL0GKky+vEaWloPfLXlyP5aSZW3SnKAElTVMfHxgjczmPTFtWGN4= zuul@np0005535962.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:05 np0005535963 python3[21737]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJW4v5GFX9Ow9lzekWl6keqGfssutSPzhiArL0GKky+vEaWloPfLXlyP5aSZW3SnKAElTVMfHxgjczmPTFtWGN4= zuul@np0005535962.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:06 np0005535963 python3[22080]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005535963.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 19:30:06 np0005535963 python3[22263]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJW4v5GFX9Ow9lzekWl6keqGfssutSPzhiArL0GKky+vEaWloPfLXlyP5aSZW3SnKAElTVMfHxgjczmPTFtWGN4= zuul@np0005535962.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 19:30:07 np0005535963 python3[22510]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:30:07 np0005535963 python3[22740]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764117006.7285721-135-185685826262664/source _original_basename=tmpdfjsp35b follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
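The sudoers content itself is masked in the log (content=NOT_LOGGING_PARAMETER); for a CI-provisioned admin user the conventional drop-in would be the following, though the exact rule is an assumption:

    # /etc/sudoers.d/cloud-admin -- assumed content, not captured in the log
    cloud-admin ALL=(ALL) NOPASSWD:ALL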
Nov 25 19:30:08 np0005535963 python3[23049]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 19:30:08 np0005535963 systemd[1]: Starting Hostname Service...
Nov 25 19:30:08 np0005535963 systemd[1]: Started Hostname Service.
Nov 25 19:30:08 np0005535963 systemd-hostnamed[23151]: Changed pretty hostname to 'compute-0'
Nov 25 19:30:08 np0005535963 systemd-hostnamed[23151]: Hostname set to <compute-0> (static)
Nov 25 19:30:08 np0005535963 NetworkManager[7182]: <info>  [1764117008.6069] hostname: static hostname changed from "np0005535963.novalocal" to "compute-0"
Nov 25 19:30:08 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:30:08 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:30:09 np0005535963 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 19:30:09 np0005535963 systemd[1]: session-5.scope: Consumed 2.651s CPU time.
Nov 25 19:30:09 np0005535963 systemd-logind[800]: Session 5 logged out. Waiting for processes to exit.
Nov 25 19:30:09 np0005535963 systemd-logind[800]: Removed session 5.
Nov 25 19:30:18 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:30:30 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:30:30 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:30:30 np0005535963 systemd[1]: man-db-cache-update.service: Consumed 1min 9.300s CPU time.
Nov 25 19:30:30 np0005535963 systemd[1]: run-r4f3fedae4a734fd787c802a495fc6d0a.service: Deactivated successfully.
Nov 25 19:30:38 np0005535963 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:35:13 np0005535963 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 19:35:13 np0005535963 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 19:35:13 np0005535963 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 19:35:13 np0005535963 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 19:35:43 np0005535963 systemd-logind[800]: New session 6 of user zuul.
Nov 25 19:35:43 np0005535963 systemd[1]: Started Session 6 of User zuul.
Nov 25 19:35:44 np0005535963 python3[30047]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:35:45 np0005535963 python3[30163]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:46 np0005535963 python3[30236]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:46 np0005535963 python3[30262]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:46 np0005535963 python3[30335]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:47 np0005535963 python3[30361]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:47 np0005535963 python3[30434]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:47 np0005535963 python3[30460]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:48 np0005535963 python3[30533]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:48 np0005535963 python3[30559]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:49 np0005535963 python3[30632]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:49 np0005535963 python3[30658]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:49 np0005535963 python3[30731]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:35:50 np0005535963 python3[30757]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 19:35:50 np0005535963 python3[30830]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764117345.338031-33667-236123052363283/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:38:30 np0005535963 python3[30889]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:43:30 np0005535963 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 19:43:30 np0005535963 systemd[1]: session-6.scope: Consumed 5.912s CPU time.
Nov 25 19:43:30 np0005535963 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Nov 25 19:43:30 np0005535963 systemd-logind[800]: Removed session 6.
Nov 25 19:51:52 np0005535963 systemd-logind[800]: New session 7 of user zuul.
Nov 25 19:51:52 np0005535963 systemd[1]: Started Session 7 of User zuul.
Nov 25 19:51:53 np0005535963 python3.9[31049]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:51:54 np0005535963 python3.9[31230]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
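Unescaping the #012 newlines, the one-shot script the task above runs is:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

i.e. it fetches the repo-setup tool from GitHub, installs it into a throwaway venv, regenerates the current-podified antelope repositories, and cleans up after itself.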
Nov 25 19:52:02 np0005535963 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 19:52:02 np0005535963 systemd[1]: session-7.scope: Consumed 8.280s CPU time.
Nov 25 19:52:02 np0005535963 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Nov 25 19:52:02 np0005535963 systemd-logind[800]: Removed session 7.
Nov 25 19:52:19 np0005535963 systemd-logind[800]: New session 8 of user zuul.
Nov 25 19:52:19 np0005535963 systemd[1]: Started Session 8 of User zuul.
Nov 25 19:52:20 np0005535963 python3.9[31443]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 19:52:21 np0005535963 python3.9[31617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:52:22 np0005535963 python3.9[31770]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:52:23 np0005535963 python3.9[31923]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:52:24 np0005535963 python3.9[32075]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:52:25 np0005535963 systemd[1]: Starting dnf makecache...
Nov 25 19:52:25 np0005535963 dnf[32199]: Failed determining last makecache time.
Nov 25 19:52:25 np0005535963 python3.9[32228]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-openstack-barbican-42b4c41831408a8e323 224 kB/s |  13 kB     00:00
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.2 MB/s |  65 kB     00:00
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-openstack-cinder-1c00d6490d88e436f26ef 653 kB/s |  32 kB     00:00
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-python-stevedore-c4acc5639fd2329372142 2.5 MB/s | 131 kB     00:00
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-python-observabilityclient-2f31846d73c 510 kB/s |  25 kB     00:00
Nov 25 19:52:25 np0005535963 dnf[32199]: delorean-os-net-config-bbae2ed8a159b0435a473f38 5.4 MB/s | 356 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 781 kB/s |  42 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-python-designate-tests-tempest-347fdbc 381 kB/s |  18 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-glance-1fd12c29b339f30fe823e 402 kB/s |  18 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 555 kB/s |  29 kB     00:00
Nov 25 19:52:26 np0005535963 python3.9[32373]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118344.880186-73-278260270503263/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-manila-3c01b7181572c95dac462 542 kB/s |  25 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-python-whitebox-neutron-tests-tempest- 3.0 MB/s | 154 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-octavia-ba397f07a7331190208c 530 kB/s |  26 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-watcher-c014f81a8647287f6dcc 354 kB/s |  16 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-python-tcib-1124124ec06aadbac34f0d340b 160 kB/s | 7.4 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 2.7 MB/s | 144 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-swift-dc98a8463506ac520c469a 267 kB/s |  14 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-python-tempestconf-8515371b7cceebd4282 1.1 MB/s |  53 kB     00:00
Nov 25 19:52:26 np0005535963 dnf[32199]: delorean-openstack-heat-ui-013accbfd179753bc3f0 1.9 MB/s |  96 kB     00:00
Nov 25 19:52:27 np0005535963 python3.9[32560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:52:27 np0005535963 dnf[32199]: CentOS Stream 9 - BaseOS                         16 MB/s | 8.8 MB     00:00
Nov 25 19:52:28 np0005535963 python3.9[32721]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:52:28 np0005535963 python3.9[32873]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:52:29 np0005535963 dnf[32199]: CentOS Stream 9 - AppStream                      31 MB/s |  25 MB     00:00
Nov 25 19:52:29 np0005535963 python3.9[33028]: ansible-ansible.builtin.service_facts Invoked
Nov 25 19:52:34 np0005535963 python3.9[33281]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:52:35 np0005535963 python3.9[33436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:52:36 np0005535963 dnf[32199]: CentOS Stream 9 - CRB                           7.9 MB/s | 7.3 MB     00:00
Nov 25 19:52:37 np0005535963 python3.9[33590]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:52:38 np0005535963 python3.9[33749]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:52:38 np0005535963 dnf[32199]: CentOS Stream 9 - Extras packages                29 kB/s |  20 kB     00:00
Nov 25 19:52:38 np0005535963 dnf[32199]: dlrn-antelope-testing                            16 MB/s | 1.1 MB     00:00
Nov 25 19:52:38 np0005535963 dnf[32199]: dlrn-antelope-build-deps                        6.4 MB/s | 461 kB     00:00
Nov 25 19:52:39 np0005535963 python3.9[33839]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:52:39 np0005535963 dnf[32199]: centos9-rabbitmq                                832 kB/s | 123 kB     00:00
Nov 25 19:52:39 np0005535963 dnf[32199]: centos9-storage                                 3.2 MB/s | 415 kB     00:00
Nov 25 19:52:39 np0005535963 dnf[32199]: centos9-opstools                                375 kB/s |  51 kB     00:00
Nov 25 19:52:40 np0005535963 dnf[32199]: NFV SIG OpenvSwitch                             3.8 MB/s | 454 kB     00:00
Nov 25 19:52:40 np0005535963 dnf[32199]: repo-setup-centos-appstream                      81 MB/s |  25 MB     00:00
Nov 25 19:52:47 np0005535963 dnf[32199]: repo-setup-centos-baseos                        6.1 MB/s | 8.8 MB     00:01
Nov 25 19:52:49 np0005535963 dnf[32199]: repo-setup-centos-highavailability              4.9 MB/s | 744 kB     00:00
Nov 25 19:52:49 np0005535963 dnf[32199]: repo-setup-centos-powertools                     32 MB/s | 7.3 MB     00:00
Nov 25 19:52:52 np0005535963 dnf[32199]: Extra Packages for Enterprise Linux 9 - x86_64   15 MB/s |  20 MB     00:01
Nov 25 19:53:05 np0005535963 dnf[32199]: Metadata cache created.
Nov 25 19:53:05 np0005535963 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 19:53:05 np0005535963 systemd[1]: Finished dnf makecache.
Nov 25 19:53:05 np0005535963 systemd[1]: dnf-makecache.service: Consumed 34.269s CPU time.
Nov 25 19:53:07 np0005535963 systemd[1]: Reloading.
Nov 25 19:53:07 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:53:07 np0005535963 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 19:53:08 np0005535963 systemd[1]: Reloading.
Nov 25 19:53:08 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:53:08 np0005535963 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 19:53:08 np0005535963 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 19:53:08 np0005535963 systemd[1]: Reloading.
Nov 25 19:53:08 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:53:08 np0005535963 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 19:53:08 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 19:53:08 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 19:53:08 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 19:54:11 np0005535963 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:54:11 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:54:11 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 19:54:12 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:54:12 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:54:12 np0005535963 systemd[1]: Reloading.
Nov 25 19:54:12 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:54:12 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:54:13 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:54:13 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:54:13 np0005535963 systemd[1]: man-db-cache-update.service: Consumed 1.377s CPU time.
Nov 25 19:54:13 np0005535963 systemd[1]: run-re036725a7af84286b06a9f7dd7b42c41.service: Deactivated successfully.
Nov 25 19:54:13 np0005535963 python3.9[35274]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:54:16 np0005535963 python3.9[35555]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 19:54:17 np0005535963 python3.9[35707]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 19:54:19 np0005535963 python3.9[35860]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:54:20 np0005535963 python3.9[36012]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
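With dump=0 and passno=0, the ansible.posix.mount task above persists the swap file as this /etc/fstab entry (state=present only writes fstab; the actual mkswap/swapon happen further down):

    /swap none swap sw 0 0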
Nov 25 19:54:21 np0005535963 python3.9[36164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:54:22 np0005535963 python3.9[36316]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:54:23 np0005535963 python3.9[36439]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118461.99199-236-99120419249613/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:54:26 np0005535963 python3.9[36591]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:54:27 np0005535963 python3.9[36743]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:54:28 np0005535963 python3.9[36896]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:54:29 np0005535963 python3.9[37048]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 19:54:29 np0005535963 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:54:30 np0005535963 python3.9[37202]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 19:54:31 np0005535963 python3.9[37360]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 19:54:32 np0005535963 python3.9[37520]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 19:54:33 np0005535963 python3.9[37673]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 19:54:34 np0005535963 python3.9[37831]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 19:54:35 np0005535963 python3.9[37983]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:54:37 np0005535963 python3.9[38136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:54:38 np0005535963 python3.9[38288]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:54:39 np0005535963 python3.9[38411]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118478.061689-355-1861533728664/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:54:40 np0005535963 python3.9[38563]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:54:40 np0005535963 systemd[1]: Starting Load Kernel Modules...
Nov 25 19:54:40 np0005535963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 19:54:40 np0005535963 kernel: Bridge firewalling registered
Nov 25 19:54:40 np0005535963 systemd-modules-load[38567]: Inserted module 'br_netfilter'
Nov 25 19:54:40 np0005535963 systemd[1]: Finished Load Kernel Modules.
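The only module the restart confirms is br_netfilter (inserted above, re-enabling the bridge netfilter hooks the kernel warning mentions), so the drop-in contains at least that line; any additional modules are not visible in the log:

    # /etc/modules-load.d/99-edpm.conf -- br_netfilter is the only entry the log confirms
    br_netfilter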
Nov 25 19:54:41 np0005535963 python3.9[38723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:54:42 np0005535963 python3.9[38846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118480.9049726-378-84957719716310/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
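The sysctl drop-in's content is likewise masked. Given that br_netfilter was just loaded, settings along these lines are plausible, but they are an assumption:

    # /etc/sysctl.d/99-edpm.conf -- assumed content, not captured in the log
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1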
Nov 25 19:54:43 np0005535963 python3.9[38998]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:54:45 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 19:54:45 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 19:54:46 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:54:46 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:54:46 np0005535963 systemd[1]: Reloading.
Nov 25 19:54:46 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:54:46 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:54:47 np0005535963 python3.9[40428]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:54:48 np0005535963 python3.9[41435]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 19:54:49 np0005535963 python3.9[42170]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:54:50 np0005535963 python3.9[43006]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:54:50 np0005535963 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 19:54:50 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:54:50 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:54:50 np0005535963 systemd[1]: man-db-cache-update.service: Consumed 5.455s CPU time.
Nov 25 19:54:50 np0005535963 systemd[1]: run-rc9138c7c92724c11ba4416cc435bd483.service: Deactivated successfully.
Nov 25 19:54:51 np0005535963 systemd[1]: Starting Authorization Manager...
Nov 25 19:54:51 np0005535963 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 19:54:51 np0005535963 polkitd[43375]: Started polkitd version 0.117
Nov 25 19:54:51 np0005535963 systemd[1]: Started Authorization Manager.
Nov 25 19:54:52 np0005535963 python3.9[43545]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:54:53 np0005535963 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 19:54:53 np0005535963 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 19:54:53 np0005535963 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 19:54:53 np0005535963 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 19:54:53 np0005535963 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 19:54:54 np0005535963 python3.9[43707]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 19:54:56 np0005535963 python3.9[43859]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:54:56 np0005535963 systemd[1]: Reloading.
Nov 25 19:54:57 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:54:58 np0005535963 python3.9[44049]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:54:59 np0005535963 systemd[1]: Reloading.
Nov 25 19:54:59 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:55:00 np0005535963 python3.9[44238]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:55:01 np0005535963 python3.9[44391]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:55:01 np0005535963 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
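The sizes are consistent: dd wrote 1024 blocks of 1 MiB, i.e. 1048576 KiB, and mkswap reserves the first 4 KiB page for its signature, leaving exactly the 1048572k of usable swap the kernel reports.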
Nov 25 19:55:01 np0005535963 python3.9[44544]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:55:04 np0005535963 python3.9[44706]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
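Writing 2 rather than 0 both stops KSM and unmerges any already-merged pages, which complements the ksm/ksmtuned service disables above:

    # /sys/kernel/mm/ksm/run: 0 = stop, 1 = run, 2 = stop and unmerge all merged pages
    echo 2 > /sys/kernel/mm/ksm/run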
Nov 25 19:55:05 np0005535963 python3.9[44859]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:55:05 np0005535963 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 19:55:05 np0005535963 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 19:55:05 np0005535963 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 19:55:05 np0005535963 systemd[1]: Starting Apply Kernel Variables...
Nov 25 19:55:05 np0005535963 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 19:55:05 np0005535963 systemd[1]: Finished Apply Kernel Variables.
Nov 25 19:55:05 np0005535963 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 19:55:05 np0005535963 systemd[1]: session-8.scope: Consumed 1min 49.338s CPU time.
Nov 25 19:55:05 np0005535963 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Nov 25 19:55:05 np0005535963 systemd-logind[800]: Removed session 8.
Nov 25 19:55:11 np0005535963 systemd-logind[800]: New session 9 of user zuul.
Nov 25 19:55:11 np0005535963 systemd[1]: Started Session 9 of User zuul.
Nov 25 19:55:12 np0005535963 python3.9[45042]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:55:14 np0005535963 python3.9[45198]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 19:55:15 np0005535963 python3.9[45351]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 19:55:16 np0005535963 python3.9[45509]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 19:55:17 np0005535963 python3.9[45669]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:55:18 np0005535963 python3.9[45753]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 19:55:22 np0005535963 python3.9[45917]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:55:33 np0005535963 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:55:33 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 19:55:33 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 19:55:33 np0005535963 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 19:55:35 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:55:35 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:55:35 np0005535963 systemd[1]: Reloading.
Nov 25 19:55:35 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:55:35 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:55:35 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:55:36 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:55:36 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:55:36 np0005535963 systemd[1]: man-db-cache-update.service: Consumed 1.015s CPU time.
Nov 25 19:55:36 np0005535963 systemd[1]: run-r22a5de15c5544f1ebe54e08062caa47d.service: Deactivated successfully.
Nov 25 19:55:37 np0005535963 python3.9[47015]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 19:55:37 np0005535963 systemd[1]: Reloading.
Nov 25 19:55:37 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:55:37 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:55:37 np0005535963 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 19:55:37 np0005535963 chown[47058]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 19:55:37 np0005535963 ovs-ctl[47063]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 19:55:37 np0005535963 ovs-ctl[47063]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-ctl[47063]: Starting ovsdb-server [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-vsctl[47112]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 19:55:38 np0005535963 ovs-vsctl[47132]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"27d03014-5e51-4d89-b5a1-b13242894075\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 19:55:38 np0005535963 ovs-ctl[47063]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-ctl[47063]: Enabling remote OVSDB managers [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-vsctl[47138]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 19:55:38 np0005535963 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 19:55:38 np0005535963 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 19:55:38 np0005535963 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 19:55:38 np0005535963 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 19:55:38 np0005535963 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 19:55:38 np0005535963 ovs-ctl[47183]: Inserting openvswitch module [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-ctl[47152]: Starting ovs-vswitchd [  OK  ]
Nov 25 19:55:38 np0005535963 ovs-vsctl[47200]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 19:55:38 np0005535963 ovs-ctl[47152]: Enabling remote OVSDB managers [  OK  ]
Nov 25 19:55:38 np0005535963 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 19:55:38 np0005535963 systemd[1]: Starting Open vSwitch...
Nov 25 19:55:38 np0005535963 systemd[1]: Finished Open vSwitch.
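The block above is the first-boot bootstrap of Open vSwitch, triggered by the ansible systemd task (equivalent to systemctl enable --now openvswitch.service): the chown complaint about /run/openvswitch and the missing /etc/openvswitch/conf.db are expected on a fresh node, after which ovs-ctl creates an empty database, starts ovsdb-server, records the version and system-id, loads the openvswitch kernel module, and starts ovs-vswitchd. A state check afterwards might look like this (hypothetical verification commands, not taken from this log):

    # Confirm both OVS daemons and the identifiers written by ovs-vsctl above
    systemctl --no-pager status ovsdb-server.service ovs-vswitchd.service
    ovs-vsctl get Open_vSwitch . ovs-version external-ids:system-id
    ovs-vsctl show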
Nov 25 19:55:39 np0005535963 python3.9[47352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:55:40 np0005535963 python3.9[47504]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 19:55:42 np0005535963 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 19:55:42 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
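The sefcontext task above adds a persistent SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t, and the kernel lines that follow are the policy reload it triggers. Outside Ansible the same change would be made roughly like this (a sketch with standard SELinux tooling, not commands from this log):

    # Equivalent of the community.general.sefcontext invocation (sketch)
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config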
Nov 25 19:55:43 np0005535963 python3.9[47659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:55:43 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 19:55:44 np0005535963 python3.9[47817]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:55:46 np0005535963 python3.9[47970]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
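rpm -V verifies each named package's installed files against the RPM database; it prints nothing and exits 0 when a package is intact, and emits one line per file whose size, mode, digest, or other attribute differs. Illustrative usage (not output from this host):

    rpm -V nftables                       # silent when the package verifies clean
    rpm -V NetworkManager || echo 'files differ from the packaged state'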
Nov 25 19:55:48 np0005535963 python3.9[48257]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 19:55:49 np0005535963 python3.9[48407]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:55:49 np0005535963 python3.9[48561]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
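NetworkManager-ovs carries the OVS device plugin (libnm-device-plugin-ovs.so) that NetworkManager loads after its restart below; without it, the ovs-bridge, ovs-port, and ovs-interface connections created later in this run could not be activated. A quick ownership check (hypothetical):

    rpm -ql NetworkManager-ovs | grep libnm-device-plugin-ovs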
Nov 25 19:55:51 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:55:51 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:55:51 np0005535963 systemd[1]: Reloading.
Nov 25 19:55:51 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:55:51 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:55:52 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:55:52 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:55:52 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:55:52 np0005535963 systemd[1]: run-rbdae6a65d5584244834a908cbdf3a3ea.service: Deactivated successfully.
Nov 25 19:55:53 np0005535963 python3.9[48877]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:55:54 np0005535963 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 19:55:54 np0005535963 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 19:55:54 np0005535963 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 19:55:54 np0005535963 systemd[1]: Stopping Network Manager...
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3258] caught SIGTERM, shutting down normally.
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3276] dhcp4 (eth0): canceled DHCP transaction
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3276] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3276] dhcp4 (eth0): state changed no lease
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3280] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:55:54 np0005535963 NetworkManager[7182]: <info>  [1764118554.3386] exiting (success)
Nov 25 19:55:54 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:55:54 np0005535963 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 19:55:54 np0005535963 systemd[1]: Stopped Network Manager.
Nov 25 19:55:54 np0005535963 systemd[1]: NetworkManager.service: Consumed 14.139s CPU time, 4.3M memory peak, read 0B from disk, written 41.0K to disk.
Nov 25 19:55:54 np0005535963 systemd[1]: Starting Network Manager...
Nov 25 19:55:54 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.4102] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3ecb6427-f9e8-4e80-8be1-c37e67edd798)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.4105] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.4196] manager[0x5654b74b0090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 19:55:54 np0005535963 systemd[1]: Starting Hostname Service...
Nov 25 19:55:54 np0005535963 systemd[1]: Started Hostname Service.
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5298] hostname: hostname: using hostnamed
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5299] hostname: static hostname changed from (none) to "compute-0"
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5306] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5313] manager[0x5654b74b0090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5314] manager[0x5654b74b0090]: rfkill: WWAN hardware radio set enabled
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5347] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5362] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5363] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5364] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5364] manager: Networking is enabled by state file
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5367] settings: Loaded settings plugin: keyfile (internal)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5373] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5415] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
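The deprecation warning above names its own remedy; migrating the remaining ifcfg profiles would look roughly like this (a sketch following the log's own hint):

    nmcli connection migrate              # rewrite ifcfg-rh profiles as keyfiles
    nmcli -f NAME,UUID,FILENAME connection show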
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5432] dhcp: init: Using DHCP client 'internal'
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5437] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5445] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5459] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5473] device (lo): Activation: starting connection 'lo' (ab48b975-dcbe-41f6-95b4-36e306e95236)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5485] device (eth0): carrier: link connected
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5493] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5504] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5506] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5520] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5533] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5545] device (eth1): carrier: link connected
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5553] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5563] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3) (indicated)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5565] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5575] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5589] device (eth1): Activation: starting connection 'ci-private-network' (cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3)
Nov 25 19:55:54 np0005535963 systemd[1]: Started Network Manager.
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5600] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5614] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5621] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5626] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5632] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5640] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5646] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5654] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5662] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5673] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5679] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5731] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5753] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5772] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.5783] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6355] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6368] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6372] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6376] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6386] device (lo): Activation: successful, device activated.
Nov 25 19:55:54 np0005535963 systemd[1]: Starting Network Manager Wait Online...
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6404] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6414] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6421] device (eth1): Activation: successful, device activated.
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6440] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6444] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6452] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6460] device (eth0): Activation: successful, device activated.
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6470] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 19:55:54 np0005535963 NetworkManager[48886]: <info>  [1764118554.6475] manager: startup complete
Nov 25 19:55:54 np0005535963 systemd[1]: Finished Network Manager Wait Online.
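NetworkManager-wait-online blocks network-online.target until NetworkManager reports startup complete, which the restart reaches here once lo, eth0, and eth1 are re-assumed and eth0 re-acquires its lease for 38.102.83.107. The same gate can be exercised by hand (sketch):

    nm-online -s -t 30 && echo 'NM startup complete'
    nmcli general status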
Nov 25 19:55:55 np0005535963 python3.9[49103]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:55:59 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 19:55:59 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 19:55:59 np0005535963 systemd[1]: Reloading.
Nov 25 19:55:59 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:55:59 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:56:00 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 19:56:00 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 19:56:00 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 19:56:00 np0005535963 systemd[1]: run-r75ec32b2f3bd4330911a9bc801032578.service: Deactivated successfully.
Nov 25 19:56:01 np0005535963 python3.9[49561]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:56:02 np0005535963 python3.9[49713]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:03 np0005535963 python3.9[49867]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:04 np0005535963 python3.9[50019]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:04 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:56:05 np0005535963 python3.9[50171]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:06 np0005535963 python3.9[50323]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
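Taken together, the ini_file tasks above assert no-auto-default=* in the [main] section of NetworkManager.conf and remove any dns=none and rc-manager=unmanaged overrides there and in 99-cloud-init.conf, so NetworkManager stops auto-generating wired profiles but resumes managing resolv.conf. Since crudini was installed earlier in this run, the result could be checked with (hypothetical):

    crudini --get /etc/NetworkManager/NetworkManager.conf main no-auto-default   # expect *
    crudini --get /etc/NetworkManager/NetworkManager.conf main dns               # expect an error: option now absent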
Nov 25 19:56:07 np0005535963 python3.9[50475]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:08 np0005535963 python3.9[50598]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118566.6980395-229-175636900898994/.source _original_basename=.u5yciiwq follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:08 np0005535963 python3.9[50750]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:09 np0005535963 python3.9[50902]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 19:56:10 np0005535963 python3.9[51054]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:13 np0005535963 python3.9[51481]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 19:56:14 np0005535963 ansible-async_wrapper.py[51656]: Invoked with j483903297384 300 /home/zuul/.ansible/tmp/ansible-tmp-1764118573.5756116-295-44770479683324/AnsiballZ_edpm_os_net_config.py _
Nov 25 19:56:14 np0005535963 ansible-async_wrapper.py[51659]: Starting module and watcher
Nov 25 19:56:14 np0005535963 ansible-async_wrapper.py[51659]: Start watching 51660 (300)
Nov 25 19:56:14 np0005535963 ansible-async_wrapper.py[51660]: Start module (51660)
Nov 25 19:56:14 np0005535963 ansible-async_wrapper.py[51656]: Return async_wrapper task started.
Nov 25 19:56:14 np0005535963 python3.9[51661]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
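edpm_os_net_config reads the /etc/os-net-config/config.yaml slurped above and, with use_nmstate=True, applies it through NetworkManager inside the checkpoint created below. Judging from the connections it adds (br-ex carrying eth1 and vlan20 through vlan23), the file presumably follows the usual os-net-config shape, something like this reconstruction (addresses and options omitted; not the actual file):

    # /etc/os-net-config/config.yaml, inferred shape only
    network_config:
      - type: ovs_bridge
        name: br-ex
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23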
Nov 25 19:56:15 np0005535963 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 19:56:15 np0005535963 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 19:56:15 np0005535963 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 19:56:15 np0005535963 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 19:56:15 np0005535963 kernel: cfg80211: failed to load regulatory.db
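The cfg80211 lines record a failed load of the wireless regulatory database: error -2 is ENOENT, meaning the wireless-regdb firmware file is simply not present. On a KVM guest with no wireless NICs this is harmless noise; installing the package would silence it (optional suggestion, not an action from this log):

    dnf install -y wireless-regdb    # only to quiet the regulatory.db warning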
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.7645] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.7665] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8219] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8220] audit: op="connection-add" uuid="974c9863-bcb7-4522-ab27-8708581e3281" name="br-ex-br" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8233] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8234] audit: op="connection-add" uuid="fe5fdded-94e3-4212-9dda-6501c2f9da82" name="br-ex-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8247] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8248] audit: op="connection-add" uuid="15beb5c6-34b5-4c79-a159-723d19516ccc" name="eth1-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8263] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8264] audit: op="connection-add" uuid="f6078484-4aef-408d-b73f-91ac8893c98f" name="vlan20-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8277] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8279] audit: op="connection-add" uuid="8e008a22-2001-41ca-bdef-6ab7dfdfea7e" name="vlan21-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8291] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8292] audit: op="connection-add" uuid="8cf004f5-d213-4e0c-ae71-e52370f27a5c" name="vlan22-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8305] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8306] audit: op="connection-add" uuid="2f510501-69dd-4c1b-bf79-fa7ce1c10c42" name="vlan23-port" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8327] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8345] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8347] audit: op="connection-add" uuid="94d9e118-ec5b-4575-bc81-62cc2f6b2a45" name="br-ex-if" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8390] audit: op="connection-update" uuid="cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3" name="ci-private-network" args="ipv4.routing-rules,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.dns,connection.slave-type,connection.controller,connection.port-type,connection.master,connection.timestamp,ovs-external-ids.data,ovs-interface.type,ipv6.routing-rules,ipv6.addresses,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.dns" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8407] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8408] audit: op="connection-add" uuid="9a3be4ef-9d5f-46f7-9ee7-8e5428be6a24" name="vlan20-if" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8426] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8428] audit: op="connection-add" uuid="9f8c870e-bf94-4794-a51b-d85e9eb40df8" name="vlan21-if" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8445] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8446] audit: op="connection-add" uuid="fbfa373a-6289-49e1-875a-2f8f53f68109" name="vlan22-if" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8464] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8465] audit: op="connection-add" uuid="17fe384a-2d1e-4d92-8bd6-921f00466e5f" name="vlan23-if" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8476] audit: op="connection-delete" uuid="7629e937-1c96-30b6-ac82-df5a4daf1292" name="Wired connection 1" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8488] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8496] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8500] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (974c9863-bcb7-4522-ab27-8708581e3281)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8500] audit: op="connection-activate" uuid="974c9863-bcb7-4522-ab27-8708581e3281" name="br-ex-br" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8501] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8506] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8510] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (fe5fdded-94e3-4212-9dda-6501c2f9da82)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8511] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8516] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8519] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (15beb5c6-34b5-4c79-a159-723d19516ccc)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8521] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8526] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8530] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f6078484-4aef-408d-b73f-91ac8893c98f)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8531] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8536] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8539] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8e008a22-2001-41ca-bdef-6ab7dfdfea7e)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8541] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8546] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8549] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8cf004f5-d213-4e0c-ae71-e52370f27a5c)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8550] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8556] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8559] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2f510501-69dd-4c1b-bf79-fa7ce1c10c42)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8559] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8561] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8562] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8567] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8571] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8575] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (94d9e118-ec5b-4575-bc81-62cc2f6b2a45)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8575] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8578] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8580] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8580] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8582] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8591] device (eth1): disconnecting for new activation request.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8592] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8594] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8595] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8596] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8599] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8602] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8605] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (9a3be4ef-9d5f-46f7-9ee7-8e5428be6a24)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8606] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8609] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8610] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8611] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8613] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8617] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8620] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9f8c870e-bf94-4794-a51b-d85e9eb40df8)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8621] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8623] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8625] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8626] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8628] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8631] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8633] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (fbfa373a-6289-49e1-875a-2f8f53f68109)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8634] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8636] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8637] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8638] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8640] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8643] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8647] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (17fe384a-2d1e-4d92-8bd6-921f00466e5f)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8647] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8650] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8651] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8652] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8653] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8666] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8667] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8670] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8671] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8677] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8681] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8684] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8687] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8689] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8693] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8696] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: ovs-system: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8699] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8700] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8704] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8707] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8709] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8710] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8714] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8717] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8719] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8720] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8724] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: Timeout policy base is empty
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8727] dhcp4 (eth0): canceled DHCP transaction
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8728] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8728] dhcp4 (eth0): state changed no lease
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8729] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 19:56:16 np0005535963 systemd-udevd[51668]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8739] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8741] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51662 uid=0 result="fail" reason="Device is not activated"
Nov 25 19:56:16 np0005535963 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8769] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8782] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8788] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8791] dhcp4 (eth0): state changed new lease, address=38.102.83.107
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8832] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8848] device (eth1): disconnecting for new activation request.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8848] audit: op="connection-activate" uuid="cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3" name="ci-private-network" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8884] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51662 uid=0 result="success"
Nov 25 19:56:16 np0005535963 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.8952] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9054] device (eth1): Activation: starting connection 'ci-private-network' (cdae7d7e-f7c8-5500-8c9d-7c54b19d1da3)
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9066] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9081] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9089] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9103] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9113] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9123] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9126] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9129] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9132] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9135] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9138] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9159] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9172] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9181] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9190] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: br-ex: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9199] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9206] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9214] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9223] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9234] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9242] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9252] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9260] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9270] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9286] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9303] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: vlan22: entered promiscuous mode
Nov 25 19:56:16 np0005535963 systemd-udevd[51666]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9394] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9413] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9424] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9435] device (eth1): Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9466] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: vlan23: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9494] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9520] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9533] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9540] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9547] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9597] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9602] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9603] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9612] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 kernel: vlan21: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9636] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 kernel: vlan20: entered promiscuous mode
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9678] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9680] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9687] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9747] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9758] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9793] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9796] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9803] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9847] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9859] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9895] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9897] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 19:56:16 np0005535963 NetworkManager[48886]: <info>  [1764118576.9905] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 19:56:18 np0005535963 NetworkManager[48886]: <info>  [1764118578.1444] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51662 uid=0 result="success"
Nov 25 19:56:18 np0005535963 NetworkManager[48886]: <info>  [1764118578.4008] checkpoint[0x5654b7486950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 19:56:18 np0005535963 NetworkManager[48886]: <info>  [1764118578.4013] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51662 uid=0 result="success"
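
The burst above is os-net-config activating the OVS bridge, its ports, and the VLAN interfaces, each walking NetworkManager's usual state machine (config -> ip-config -> ip-check -> secondaries -> activated) inside a NetworkManager checkpoint, so a failed apply could be rolled back atomically. The audit ops map one-to-one onto NetworkManager's D-Bus checkpoint API; a minimal sketch of that dance, assuming the python dbus bindings, with apply_network_config() and the timeout values being illustrative rather than taken from the log:

    import dbus

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object('org.freedesktop.NetworkManager',
                       '/org/freedesktop/NetworkManager'),
        'org.freedesktop.NetworkManager')

    # Empty device array = checkpoint every device; auto-rollback after 60 s
    # unless the checkpoint is destroyed (committed) first.
    cp = nm.CheckpointCreate(dbus.Array([], signature='o'), 60, 0)
    try:
        apply_network_config()                        # hypothetical apply step
        nm.CheckpointAdjustRollbackTimeout(cp, 120)   # buy time while verifying
        nm.CheckpointDestroy(cp)                      # commit: cancel rollback
    except Exception:
        nm.CheckpointRollback(cp)                     # revert to the snapshot
        raise
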
Nov 25 19:56:18 np0005535963 python3.9[52019]: ansible-ansible.legacy.async_status Invoked with jid=j483903297384.51656 mode=status _async_dir=/root/.ansible_async
Nov 25 19:56:18 np0005535963 NetworkManager[48886]: <info>  [1764118578.8677] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51662 uid=0 result="success"
Nov 25 19:56:18 np0005535963 NetworkManager[48886]: <info>  [1764118578.8694] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51662 uid=0 result="success"
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.1562] audit: op="networking-control" arg="global-dns-configuration" pid=51662 uid=0 result="success"
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.1600] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.1625] audit: op="networking-control" arg="global-dns-configuration" pid=51662 uid=0 result="success"
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.2108] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51662 uid=0 result="success"
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.4499] checkpoint[0x5654b7486a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 19:56:19 np0005535963 NetworkManager[48886]: <info>  [1764118579.4507] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51662 uid=0 result="success"
Nov 25 19:56:19 np0005535963 ansible-async_wrapper.py[51660]: Module complete (51660)
Nov 25 19:56:19 np0005535963 ansible-async_wrapper.py[51659]: 51660 still running (300)
Nov 25 19:56:22 np0005535963 python3.9[52126]: ansible-ansible.legacy.async_status Invoked with jid=j483903297384.51656 mode=status _async_dir=/root/.ansible_async
Nov 25 19:56:22 np0005535963 python3.9[52225]: ansible-ansible.legacy.async_status Invoked with jid=j483903297384.51656 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 19:56:23 np0005535963 python3.9[52377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:24 np0005535963 python3.9[52500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118582.9199252-322-157081708551776/.source.returncode _original_basename=.obkubor8 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
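
The content of os-net-config.returncode is masked (content=NOT_LOGGING_PARAMETER), but the logged SHA-1 checksum b6589fc6ab0dc82cf12099d1c2d40ab994e8410c is the digest of the single character "0", so the recorded os-net-config exit status was 0. A one-line check:

    import hashlib
    # SHA-1 of the byte string b'0' matches the checksum logged for
    # /var/lib/edpm-config/os-net-config.returncode above.
    assert hashlib.sha1(b'0').hexdigest() == 'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c'
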
Nov 25 19:56:24 np0005535963 ansible-async_wrapper.py[51659]: Done in kid B.
Nov 25 19:56:24 np0005535963 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 19:56:25 np0005535963 python3.9[52654]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:25 np0005535963 python3.9[52778]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118584.5058186-338-248408887085907/.source.cfg _original_basename=.4vzsyxba follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:26 np0005535963 python3.9[52930]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:56:26 np0005535963 systemd[1]: Reloading Network Manager...
Nov 25 19:56:27 np0005535963 NetworkManager[48886]: <info>  [1764118587.0247] audit: op="reload" arg="0" pid=52934 uid=0 result="success"
Nov 25 19:56:27 np0005535963 NetworkManager[48886]: <info>  [1764118587.0255] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 19:56:27 np0005535963 systemd[1]: Reloaded Network Manager.
Nov 25 19:56:27 np0005535963 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 19:56:27 np0005535963 systemd[1]: session-9.scope: Consumed 55.375s CPU time.
Nov 25 19:56:27 np0005535963 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Nov 25 19:56:27 np0005535963 systemd-logind[800]: Removed session 9.
Nov 25 19:56:32 np0005535963 systemd-logind[800]: New session 10 of user zuul.
Nov 25 19:56:32 np0005535963 systemd[1]: Started Session 10 of User zuul.
Nov 25 19:56:34 np0005535963 python3.9[53118]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:35 np0005535963 python3.9[53273]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:56:37 np0005535963 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 19:56:37 np0005535963 python3.9[53468]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:56:38 np0005535963 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 19:56:38 np0005535963 systemd[1]: session-10.scope: Consumed 3.021s CPU time.
Nov 25 19:56:38 np0005535963 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Nov 25 19:56:38 np0005535963 systemd-logind[800]: Removed session 10.
Nov 25 19:56:43 np0005535963 systemd-logind[800]: New session 11 of user zuul.
Nov 25 19:56:43 np0005535963 systemd[1]: Started Session 11 of User zuul.
Nov 25 19:56:45 np0005535963 python3.9[53650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:46 np0005535963 python3.9[53804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:56:47 np0005535963 python3.9[53960]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:56:48 np0005535963 python3.9[54045]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:56:50 np0005535963 python3.9[54198]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:56:51 np0005535963 python3.9[54393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:56:52 np0005535963 python3.9[54545]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:56:52 np0005535963 systemd[1]: var-lib-containers-storage-overlay-compat3760194306-merged.mount: Deactivated successfully.
Nov 25 19:56:52 np0005535963 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck388281916-merged.mount: Deactivated successfully.
Nov 25 19:56:52 np0005535963 podman[54546]: 2025-11-26 00:56:52.779960538 +0000 UTC m=+0.069165558 system refresh
Nov 25 19:56:53 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 19:56:53 np0005535963 python3.9[54708]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:54 np0005535963 python3.9[54831]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118613.0309744-79-69251714829587/.source.json follow=False _original_basename=podman_network_config.j2 checksum=dd9422db6add5cfbe51861ec377373cd06391f98 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
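
The rendered podman_network_config.j2 replaces the default "podman" network definition under /etc/containers/networks/ (the directory created a few tasks earlier). The rendered JSON itself is masked; for orientation only, a netavark bridge network file typically has this shape, with every value here being an illustrative default rather than anything taken from the log:

    {
      "name": "podman",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [
        { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
      ],
      "ipv6_enabled": false,
      "internal": false,
      "dns_enabled": false
    }
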
Nov 25 19:56:55 np0005535963 python3.9[54983]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:56:56 np0005535963 python3.9[55106]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118614.8538306-94-275674864570634/.source.conf follow=False _original_basename=registries.conf.j2 checksum=6f4b8cd86a91c4902ce1d5be02f975ff3c7494d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:56:57 np0005535963 python3.9[55258]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:56:58 np0005535963 python3.9[55410]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:56:58 np0005535963 python3.9[55562]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:56:59 np0005535963 python3.9[55714]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
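
The four ini_file tasks above converge /etc/containers/containers.conf. Sections, option names, and values are all taken from the logged invocations, so after this run the managed options should read (modulo spacing):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
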
Nov 25 19:57:00 np0005535963 python3.9[55866]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:57:03 np0005535963 python3.9[56019]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:57:03 np0005535963 python3.9[56173]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:57:04 np0005535963 python3.9[56325]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:57:05 np0005535963 python3.9[56477]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:57:06 np0005535963 python3.9[56630]: ansible-service_facts Invoked
Nov 25 19:57:06 np0005535963 network[56647]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 19:57:06 np0005535963 network[56648]: 'network-scripts' will be removed from distribution in near future.
Nov 25 19:57:06 np0005535963 network[56649]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 19:57:12 np0005535963 python3.9[57101]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 19:57:15 np0005535963 python3.9[57254]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 19:57:16 np0005535963 python3.9[57406]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:17 np0005535963 python3.9[57531]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118636.1189106-238-121083455931217/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:18 np0005535963 python3.9[57685]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:19 np0005535963 python3.9[57810]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118637.7666047-253-226842719007128/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:20 np0005535963 python3.9[57964]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
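
Alongside the chrony templates, NTP-over-DHCP is pinned off via PEERNTP=no in /etc/sysconfig/network. The lineinfile task is the standard idempotent replace-or-append edit; a minimal Python sketch of that pattern (not the module's actual implementation):

    import re
    from pathlib import Path

    def ensure_line(path, regexp, line):
        """Replace the first line matching `regexp`, else append `line`."""
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(regexp)
        for i, existing in enumerate(lines):
            if pat.search(existing):
                lines[i] = line
                break
        else:
            lines.append(line)
        p.write_text('\n'.join(lines) + '\n')

    ensure_line('/etc/sysconfig/network', r'^PEERNTP=', 'PEERNTP=no')
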
Nov 25 19:57:21 np0005535963 python3.9[58118]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:57:22 np0005535963 python3.9[58202]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:57:23 np0005535963 python3.9[58356]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:57:24 np0005535963 python3.9[58440]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:57:24 np0005535963 chronyd[795]: chronyd exiting
Nov 25 19:57:24 np0005535963 systemd[1]: Stopping NTP client/server...
Nov 25 19:57:24 np0005535963 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 19:57:24 np0005535963 systemd[1]: Stopped NTP client/server.
Nov 25 19:57:24 np0005535963 systemd[1]: Starting NTP client/server...
Nov 25 19:57:25 np0005535963 chronyd[58449]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 19:57:25 np0005535963 chronyd[58449]: Frequency -28.366 +/- 0.147 ppm read from /var/lib/chrony/drift
Nov 25 19:57:25 np0005535963 chronyd[58449]: Loaded seccomp filter (level 2)
Nov 25 19:57:25 np0005535963 systemd[1]: Started NTP client/server.
Nov 25 19:57:25 np0005535963 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 19:57:25 np0005535963 systemd[1]: session-11.scope: Consumed 30.154s CPU time.
Nov 25 19:57:25 np0005535963 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Nov 25 19:57:25 np0005535963 systemd-logind[800]: Removed session 11.
Nov 25 19:57:31 np0005535963 systemd-logind[800]: New session 12 of user zuul.
Nov 25 19:57:31 np0005535963 systemd[1]: Started Session 12 of User zuul.
Nov 25 19:57:32 np0005535963 python3.9[58630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:33 np0005535963 python3.9[58782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:34 np0005535963 python3.9[58905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118652.848847-34-182221933088312/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:35 np0005535963 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 19:57:35 np0005535963 systemd[1]: session-12.scope: Consumed 2.156s CPU time.
Nov 25 19:57:35 np0005535963 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Nov 25 19:57:35 np0005535963 systemd-logind[800]: Removed session 12.
Nov 25 19:57:40 np0005535963 systemd-logind[800]: New session 13 of user zuul.
Nov 25 19:57:40 np0005535963 systemd[1]: Started Session 13 of User zuul.
Nov 25 19:57:42 np0005535963 python3.9[59083]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:57:43 np0005535963 python3.9[59239]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:44 np0005535963 python3.9[59414]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:45 np0005535963 python3.9[59537]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764118663.6001205-41-171180297021520/.source.json _original_basename=.iq3nmfar follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:46 np0005535963 python3.9[59689]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:46 np0005535963 python3.9[59812]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118665.5936406-64-205226201346409/.source _original_basename=.yog2t8fo follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:47 np0005535963 python3.9[59964]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:57:48 np0005535963 python3.9[60116]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:49 np0005535963 python3.9[60239]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118667.9883716-88-261145461357061/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:57:50 np0005535963 python3.9[60392]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:50 np0005535963 python3.9[60515]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118669.4776337-88-195334242758409/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:57:51 np0005535963 python3.9[60667]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:52 np0005535963 python3.9[60819]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:53 np0005535963 python3.9[60942]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118671.7288725-125-21367778916828/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:53 np0005535963 python3.9[61094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:54 np0005535963 python3.9[61217]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118673.274184-140-1085157693842/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:55 np0005535963 python3.9[61369]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:57:55 np0005535963 systemd[1]: Reloading.
Nov 25 19:57:56 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:57:56 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:57:56 np0005535963 systemd[1]: Reloading.
Nov 25 19:57:56 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:57:56 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:57:56 np0005535963 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 19:57:56 np0005535963 systemd[1]: Finished EDPM Container Shutdown.
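
The shutdown helper is wired up through a systemd preset as well as a direct enable. The preset's content is not logged; given the file name and standard systemd.preset syntax it presumably holds a single directive along these lines (assumed, not from the log):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed)
    enable edpm-container-shutdown.service
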
Nov 25 19:57:57 np0005535963 python3.9[61597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:57 np0005535963 python3.9[61720]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118676.7204754-163-246146207802235/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:57:58 np0005535963 python3.9[61872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:57:59 np0005535963 python3.9[61995]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118678.200072-178-119901559258863/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:00 np0005535963 python3.9[62147]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:58:00 np0005535963 systemd[1]: Reloading.
Nov 25 19:58:00 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:58:00 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:58:00 np0005535963 systemd[1]: Reloading.
Nov 25 19:58:00 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:58:00 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:58:00 np0005535963 systemd[1]: Starting Create netns directory...
Nov 25 19:58:00 np0005535963 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 19:58:00 np0005535963 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 19:58:00 np0005535963 systemd[1]: Finished Create netns directory.
Nov 25 19:58:01 np0005535963 python3.9[62374]: ansible-ansible.builtin.service_facts Invoked
Nov 25 19:58:01 np0005535963 network[62391]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 19:58:01 np0005535963 network[62392]: 'network-scripts' will be removed from distribution in near future.
Nov 25 19:58:01 np0005535963 network[62393]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 19:58:06 np0005535963 python3.9[62655]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:58:06 np0005535963 systemd[1]: Reloading.
Nov 25 19:58:06 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:58:06 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:58:06 np0005535963 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 19:58:07 np0005535963 iptables.init[62695]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 19:58:07 np0005535963 iptables.init[62695]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 19:58:07 np0005535963 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 19:58:07 np0005535963 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 19:58:08 np0005535963 python3.9[62891]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:58:09 np0005535963 python3.9[63045]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 19:58:09 np0005535963 systemd[1]: Reloading.
Nov 25 19:58:09 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 19:58:09 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 19:58:09 np0005535963 systemd[1]: Starting Netfilter Tables...
Nov 25 19:58:09 np0005535963 systemd[1]: Finished Netfilter Tables.
Nov 25 19:58:10 np0005535963 python3.9[63237]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:58:11 np0005535963 python3.9[63390]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:12 np0005535963 python3.9[63515]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118691.3577058-247-67669565081317/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
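
Note the validate=/usr/sbin/sshd -T -f %s argument: the new sshd_config is syntax-checked in its temporary location and only swapped in if sshd can parse it, so a bad template cannot lock out SSH access. The same guard as a standalone check (paths illustrative):

    import subprocess

    def sshd_config_ok(candidate):
        """True iff sshd can parse the candidate config (`sshd -T -f`)."""
        return subprocess.run(['/usr/sbin/sshd', '-T', '-f', candidate],
                              capture_output=True).returncode == 0
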
Nov 25 19:58:13 np0005535963 python3.9[63668]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 19:58:13 np0005535963 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 19:58:13 np0005535963 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 19:58:14 np0005535963 python3.9[63824]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:15 np0005535963 python3.9[63976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:16 np0005535963 python3.9[64099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118694.8585274-278-6998569946058/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:17 np0005535963 python3.9[64251]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 19:58:17 np0005535963 systemd[1]: Starting Time & Date Service...
Nov 25 19:58:17 np0005535963 systemd[1]: Started Time & Date Service.
Nov 25 19:58:18 np0005535963 python3.9[64407]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:19 np0005535963 python3.9[64559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:19 np0005535963 python3.9[64682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118698.520821-313-98456073967639/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:20 np0005535963 python3.9[64834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:21 np0005535963 python3.9[64957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118700.070384-328-102228281678934/.source.yaml _original_basename=.r27m80vh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:22 np0005535963 python3.9[65109]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:22 np0005535963 python3.9[65232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118701.564533-343-201389481377851/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:23 np0005535963 python3.9[65384]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:58:24 np0005535963 python3.9[65537]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:58:25 np0005535963 python3[65690]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
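
edpm_nftables_from_files is a custom module from the edpm-ansible collection; it aggregates the rule files dropped under /var/lib/edpm-config/firewall earlier in the run (ceph-networks.yaml, sshd-networks.yaml, the base and user rules). Its real implementation lives in that collection; the gist of the pattern is only this (a sketch, assuming each file holds a list of rule entries):

    from pathlib import Path
    import yaml

    def gather_rules(src='/var/lib/edpm-config/firewall'):
        rules = []
        for rule_file in sorted(Path(src).glob('*.yaml')):
            rules.extend(yaml.safe_load(rule_file.read_text()) or [])
        return rules
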
Nov 25 19:58:26 np0005535963 python3.9[65842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:27 np0005535963 python3.9[65965]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118705.8324802-382-242002339452780/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:28 np0005535963 python3.9[66117]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:28 np0005535963 python3.9[66240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118707.4584935-397-62544748287267/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:29 np0005535963 python3.9[66392]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:30 np0005535963 python3.9[66515]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118709.1571507-412-85762958263699/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:31 np0005535963 python3.9[66667]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:31 np0005535963 python3.9[66790]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118710.6726003-427-78371850849134/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:32 np0005535963 python3.9[66942]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:58:33 np0005535963 python3.9[67065]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118712.2043252-442-203650223960191/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:34 np0005535963 python3.9[67217]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:35 np0005535963 python3.9[67369]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:58:36 np0005535963 python3.9[67528]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
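
Decoding the #012 (newline) escapes in the block= argument, the managed block written to /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

so the nftables service replays the same ruleset on boot; the validate=nft -c -f %s option check-parses the file before it is committed.
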
Nov 25 19:58:37 np0005535963 python3.9[67681]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:38 np0005535963 python3.9[67833]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:39 np0005535963 python3.9[67985]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 19:58:40 np0005535963 python3.9[68138]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
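
Both hugetlbfs mounts are created with boot=True, so beyond mounting them now the tasks also persist them; the resulting /etc/fstab entries should be, give or take whitespace:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
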
Nov 25 19:58:40 np0005535963 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 19:58:40 np0005535963 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Nov 25 19:58:40 np0005535963 systemd[1]: session-13.scope: Consumed 44.525s CPU time.
Nov 25 19:58:40 np0005535963 systemd-logind[800]: Removed session 13.
Nov 25 19:58:46 np0005535963 systemd-logind[800]: New session 14 of user zuul.
Nov 25 19:58:46 np0005535963 systemd[1]: Started Session 14 of User zuul.
Nov 25 19:58:47 np0005535963 python3.9[68320]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 19:58:47 np0005535963 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 19:58:48 np0005535963 python3.9[68474]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:58:49 np0005535963 python3.9[68626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:58:50 np0005535963 python3.9[68778]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCykmnY+oafG3mHme/LpEkb2adSDQrzMN3MimIJb6cb9uyFPPekXIkxuzLR2hnrvQYJh8FRip2XXTA7OK9VGOt/2ffm5oV/vtTcglUGBGV2I6g6oMNtUbnvnulNj76pFz/cfKe0hQkAGM+b2aadpjm9DG0vOtuULnGPYiexfSN6uH58xfd6fWWwXjl3fLfUAdeMMfIXKn8+yO/MWeiP0OXqDBlmxsSq2awwlyW9zXr3UKOEVNzRm1HWuDoC92FALJq2LRIlgRWL62xsOSzlx2yESDY5d5NMP8+T5pbIRZls9qv5+Ngd2uM4RwQeE8HfNRAn9pBMJH1w0wa4/SkUv7v+88rm9mUzO9qsWn4KxM3S4ZJ9OGdX6YIRZ1gi4mMR9avWqoJHvs60HyrpKTvZHZrgOLXzXP+Dt35H271u/euxUPrrrRKH77hRA+rUnFkO1gpJFKKdp+VODlgXMotBRQOtwFhOf5UfJivpSu1UeS3WlZKkmCVCnf3KFdlEkcKNNjU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4JumxWKxmoxGnJJmVBjitKlLFgQ6W4f029bTfAiSDd#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLJ2eh4CQVE9/EuBwJMMRg0Myb0WN6nOq5cVeYrcwl3vKUnKN3kWqlDkumr3pQyW/7ceK7qycJrI9T1pQjoOj2A=#012 create=True mode=0644 path=/tmp/ansible.coli3038 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:58:51 np0005535963 python3.9[68930]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.coli3038' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:58:52 np0005535963 python3.9[69084]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.coli3038 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
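
[Editor's note] Session 14 above assembles a system-wide SSH known-hosts file out of place and only then overwrites /etc/ssh/ssh_known_hosts, finally removing the staging file. A hedged sketch of the pattern; the host keys are abbreviated here, and in the real run the block is rendered from the gathered ssh_host_key_*_public facts:

    - name: Create a staging file for the known-hosts content
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp   # e.g. /tmp/ansible.coli3038 in the log

    - name: Write the managed host-key block into the staging file
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: '0644'
        block: |
          compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAA...
          compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...
          compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAA...

    - name: Install the assembled file as the system known_hosts
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the staging file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent
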
Nov 25 19:58:53 np0005535963 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 19:58:53 np0005535963 systemd[1]: session-14.scope: Consumed 4.223s CPU time.
Nov 25 19:58:53 np0005535963 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Nov 25 19:58:53 np0005535963 systemd-logind[800]: Removed session 14.
Nov 25 19:58:58 np0005535963 systemd-logind[800]: New session 15 of user zuul.
Nov 25 19:58:58 np0005535963 systemd[1]: Started Session 15 of User zuul.
Nov 25 19:58:59 np0005535963 python3.9[69262]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:59:01 np0005535963 python3.9[69418]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 19:59:02 np0005535963 python3.9[69572]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
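
[Editor's note] sshd is enabled and started through two separate systemd-module calls, mirroring the two invocations above; a minimal sketch:

    - name: Enable sshd at boot
      ansible.builtin.systemd:
        name: sshd
        enabled: true

    - name: Ensure sshd is running
      ansible.builtin.systemd:
        name: sshd
        state: started
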
Nov 25 19:59:02 np0005535963 python3.9[69725]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:04 np0005535963 python3.9[69878]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:04 np0005535963 python3.9[70032]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:05 np0005535963 python3.9[70187]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
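
[Editor's note] The four tasks above are the EDPM firewall apply cycle: load the chain definitions, test for a .changed marker, pipe flushes, rules and jump updates into a single nft transaction, then clear the marker. A sketch of that cycle; the `when` conditionals are an assumption, since the journal does not record skipped tasks:

    - name: Ensure the EDPM chains exist
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the ruleset changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: rules_changed

    - name: Flush and reload the EDPM rules in one nft transaction
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: rules_changed.stat.exists   # assumed guard, not visible in the journal

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: rules_changed.stat.exists
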
Nov 25 19:59:06 np0005535963 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 19:59:06 np0005535963 systemd[1]: session-15.scope: Consumed 5.476s CPU time.
Nov 25 19:59:06 np0005535963 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Nov 25 19:59:06 np0005535963 systemd-logind[800]: Removed session 15.
Nov 25 19:59:11 np0005535963 systemd-logind[800]: New session 16 of user zuul.
Nov 25 19:59:11 np0005535963 systemd[1]: Started Session 16 of User zuul.
Nov 25 19:59:12 np0005535963 python3.9[70365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:59:14 np0005535963 python3.9[70521]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 19:59:15 np0005535963 python3.9[70605]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 19:59:17 np0005535963 python3.9[70756]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 19:59:18 np0005535963 python3.9[70907]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
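
[Editor's note] The update check above installs yum-utils for its needs-restarting tool and also scans /var/lib/openstack/reboot_required/ for per-service flag files. A sketch; the failed_when handling is an assumption (needs-restarting -r exits non-zero when a reboot is pending):

    - name: Install yum-utils (provides needs-restarting)
      ansible.builtin.dnf:
        name: yum-utils

    - name: Ask dnf whether a reboot is required
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting
      failed_when: false   # assumption: a non-zero rc just means "reboot required"

    - name: Look for service-specific reboot flags
      ansible.builtin.find:
        paths:
          - /var/lib/openstack/reboot_required/
        file_type: file
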
Nov 25 19:59:19 np0005535963 python3.9[71057]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:19 np0005535963 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 19:59:20 np0005535963 python3.9[71208]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 19:59:20 np0005535963 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 19:59:20 np0005535963 systemd[1]: session-16.scope: Consumed 6.872s CPU time.
Nov 25 19:59:20 np0005535963 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Nov 25 19:59:20 np0005535963 systemd-logind[800]: Removed session 16.
Nov 25 19:59:26 np0005535963 systemd-logind[800]: New session 17 of user zuul.
Nov 25 19:59:26 np0005535963 systemd[1]: Started Session 17 of User zuul.
Nov 25 19:59:27 np0005535963 python3.9[71386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 19:59:29 np0005535963 python3.9[71542]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:29 np0005535963 python3.9[71694]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:30 np0005535963 python3.9[71846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:31 np0005535963 python3.9[71969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118770.1779916-65-273061675346685/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6ecf06b6edcea7d1fbd4ae7ff5906ea018bdb3aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:32 np0005535963 python3.9[72121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:33 np0005535963 python3.9[72244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118771.988324-65-153626682426598/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ee6f2d109259b31d57ad4c6e13860e018bea8565 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:34 np0005535963 python3.9[72396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:34 np0005535963 chronyd[58449]: Selected source 23.159.16.194 (pool.ntp.org)
Nov 25 19:59:34 np0005535963 python3.9[72519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118773.4271436-65-95412349665175/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7d21f798ff8f71f1f0f5a22b0c3ec1ddd2116cf2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
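
[Editor's note] Each service certificate set follows the same shape: a 0755 container_file_t directory, then tls.crt, ca.crt and tls.key copied with mode 0600. The identical pattern repeats below for telemetry-power-monitoring, libvirt and ovn. A condensed sketch for the telemetry case; the loop is an editorial condensation, the real run issues one stat/copy pair per file:

    - name: Create the per-service certificate directory
      ansible.builtin.file:
        path: /var/lib/openstack/certs/telemetry/default
        state: directory
        owner: root
        group: root
        mode: '0755'
        setype: container_file_t

    - name: Install the certificate material (source names from _original_basename)
      ansible.builtin.copy:
        src: "compute-0.ctlplane.example.com-{{ item }}"
        dest: "/var/lib/openstack/certs/telemetry/default/{{ item }}"
        owner: root
        group: root
        mode: '0600'
      loop:
        - tls.crt
        - ca.crt
        - tls.key
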
Nov 25 19:59:35 np0005535963 python3.9[72671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:36 np0005535963 python3.9[72823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:37 np0005535963 python3.9[72975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:37 np0005535963 python3.9[73098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118776.6090608-124-145824530342971/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a37e24eb1fafb26d2d16206736944a22587643db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:38 np0005535963 python3.9[73250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:39 np0005535963 python3.9[73373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118778.157067-124-138351199673139/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ee6f2d109259b31d57ad4c6e13860e018bea8565 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:40 np0005535963 python3.9[73525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:40 np0005535963 python3.9[73648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118779.708967-124-114411408661651/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=468a145aeed099b09df6e56cdfafbbb9cdab1323 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:41 np0005535963 python3.9[73800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:42 np0005535963 python3.9[73952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:43 np0005535963 python3.9[74104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:44 np0005535963 python3.9[74227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118782.914827-183-164326894988139/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1718a9879c6c13bb7ffaacfa47773c0ba3add3a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:44 np0005535963 python3.9[74379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:45 np0005535963 python3.9[74502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118784.3820543-183-185746982945249/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5cbc88f17d05a1b378e80e219a305f40df4c8469 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:46 np0005535963 python3.9[74654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:47 np0005535963 python3.9[74777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118785.8433423-183-151877951197597/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=210b936da777b560d8cf4fbcdc5598d9a508a0ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:47 np0005535963 python3.9[74929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:48 np0005535963 python3.9[75081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:49 np0005535963 python3.9[75233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:50 np0005535963 python3.9[75356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118789.1312838-242-281241530843350/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=088bf1219f9c5827177f2398f0f530c09a0159e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:51 np0005535963 python3.9[75508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:51 np0005535963 python3.9[75631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118790.6127434-242-13972319769505/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b771969d0143ad59aea8506fa55f83a43a00e414 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:52 np0005535963 python3.9[75783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:53 np0005535963 python3.9[75906]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118792.0081906-242-109562791379351/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6cc92ab9329e37287d49e3898bbb202b337eefca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:54 np0005535963 python3.9[76058]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:55 np0005535963 python3.9[76210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:56 np0005535963 python3.9[76333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118794.954873-310-33538820399937/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:57 np0005535963 python3.9[76485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 19:59:57 np0005535963 python3.9[76637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 19:59:58 np0005535963 python3.9[76760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118797.3436968-334-5847402444661/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 19:59:59 np0005535963 python3.9[76912]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:00 np0005535963 python3.9[77064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:01 np0005535963 python3.9[77187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118799.864604-358-261032187467165/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:02 np0005535963 python3.9[77339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:02 np0005535963 python3.9[77491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:03 np0005535963 python3.9[77614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118802.271592-382-9896015520451/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:04 np0005535963 python3.9[77767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:05 np0005535963 python3.9[77919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:06 np0005535963 python3.9[78042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118804.7907238-406-16798051039049/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:06 np0005535963 python3.9[78194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:07 np0005535963 python3.9[78346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:08 np0005535963 python3.9[78469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118807.1263373-430-190180035571893/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
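
[Editor's note] The same tls-ca-bundle.pem (identical checksum 661af12c... in every copy above) is fanned out to one cacerts directory per service. A condensed sketch; the real run creates each directory and copy as separate tasks:

    - name: Create one CA-bundle directory per service
      ansible.builtin.file:
        path: "/var/lib/openstack/cacerts/{{ item }}"
        state: directory
        owner: root
        group: root
        mode: '0755'
        setype: container_file_t
      loop: &cacert_services
        - ovn
        - telemetry
        - repo-setup
        - libvirt
        - bootstrap
        - telemetry-power-monitoring

    - name: Install the shared CA bundle into each directory
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: '0644'
      loop: *cacert_services
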
Nov 25 20:00:08 np0005535963 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Nov 25 20:00:08 np0005535963 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 20:00:08 np0005535963 systemd[1]: session-17.scope: Consumed 33.629s CPU time.
Nov 25 20:00:08 np0005535963 systemd-logind[800]: Removed session 17.
Nov 25 20:00:14 np0005535963 systemd-logind[800]: New session 18 of user zuul.
Nov 25 20:00:14 np0005535963 systemd[1]: Started Session 18 of User zuul.
Nov 25 20:00:15 np0005535963 python3.9[78647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:17 np0005535963 python3.9[78803]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:18 np0005535963 python3.9[78955]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:19 np0005535963 python3.9[79105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:20 np0005535963 python3.9[79257]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
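
[Editor's note] The SELinux boolean flip above is what triggers the policy reload reported by dbus-broker-launch on the next line. As a task:

    - name: Allow containerized virt workloads to use netlink sockets
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true   # persists across reboots; causes the load_policy avc event
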
Nov 25 20:00:22 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 20:00:22 np0005535963 python3.9[79413]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:00:23 np0005535963 python3.9[79497]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:00:25 np0005535963 python3.9[79650]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:00:26 np0005535963 python3[79805]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
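
[Editor's note] journald escapes embedded newlines as #012. Decoded, the snippet that edpm_nftables_snippet writes to /var/lib/edpm-config/firewall/ovn.yaml reads:

    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []
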
Nov 25 20:00:27 np0005535963 python3.9[79957]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:28 np0005535963 python3.9[80109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:29 np0005535963 python3.9[80187]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:30 np0005535963 python3.9[80339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:30 np0005535963 python3.9[80417]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yxgvlecu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:31 np0005535963 python3.9[80569]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:32 np0005535963 python3.9[80647]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:33 np0005535963 python3.9[80799]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:34 np0005535963 python3[80952]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:00:35 np0005535963 python3.9[81104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:36 np0005535963 python3.9[81229]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118834.666131-157-19650307371952/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:36 np0005535963 python3.9[81381]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:37 np0005535963 python3.9[81506]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118836.3043165-172-94441159847248/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:38 np0005535963 python3.9[81658]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:39 np0005535963 python3.9[81783]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118837.8540313-187-79928104302369/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:40 np0005535963 python3.9[81935]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:40 np0005535963 python3.9[82060]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118839.4209027-202-14087952420079/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:41 np0005535963 python3.9[82212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:42 np0005535963 python3.9[82337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764118841.0347724-217-211689178342304/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:43 np0005535963 python3.9[82489]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:44 np0005535963 python3.9[82641]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:45 np0005535963 python3.9[82796]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
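
[Editor's note] Decoded from its #012 escapes, the block that blockinfile validates with `nft -c -f %s` and maintains between its markers in /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
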
Nov 25 20:00:46 np0005535963 python3.9[82948]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:46 np0005535963 python3.9[83101]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:00:47 np0005535963 python3.9[83255]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:48 np0005535963 python3.9[83410]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:50 np0005535963 python3.9[83560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:00:51 np0005535963 python3.9[83713]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:51 np0005535963 ovs-vsctl[83714]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 20:00:52 np0005535963 python3.9[83866]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:53 np0005535963 python3.9[84021]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:00:53 np0005535963 ovs-vsctl[84022]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
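
[Editor's note] The two commands above configure the OVN chassis and create a local ptcp manager (the second runs only because the preceding `ovs-vsctl show | grep -q "Manager"` found none). Rendered as tasks, with the logged values and line breaks added for readability:

    - name: Configure OVN chassis external_ids on the local Open vSwitch
      ansible.builtin.shell: >
        ovs-vsctl set open .
        external_ids:hostname=compute-0.ctlplane.example.com
        external_ids:ovn-bridge=br-int
        external_ids:ovn-bridge-mappings=datacentre:br-ex
        external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7"
        external_ids:ovn-encap-ip=172.19.0.100
        external_ids:ovn-encap-type=geneve
        external_ids:ovn-encap-tos=0
        external_ids:ovn-match-northd-version=False
        external_ids:ovn-monitor-all=True
        external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642
        external_ids:ovn-remote-probe-interval=60000
        external_ids:ovn-ofctrl-wait-before-clear=8000
        external_ids:rundir=/var/run/openvswitch

    - name: Create a local ptcp manager on port 6640 if none exists
      ansible.builtin.shell: |
        ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
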
Nov 25 20:00:53 np0005535963 python3.9[84172]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:00:54 np0005535963 python3.9[84326]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:55 np0005535963 python3.9[84478]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:56 np0005535963 python3.9[84556]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:56 np0005535963 python3.9[84708]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:57 np0005535963 python3.9[84786]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:00:58 np0005535963 python3.9[84938]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:00:59 np0005535963 python3.9[85090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:00:59 np0005535963 python3.9[85168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:00 np0005535963 python3.9[85320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:01 np0005535963 python3.9[85398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:02 np0005535963 python3.9[85565]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
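
[Editor's note] The tasks above install the edpm-container-shutdown unit plus a systemd preset, then reload, enable and start it; the same unit-and-preset pattern repeats below for netns-placeholder. A sketch using the _original_basename values from the log as source names:

    - name: Install the shutdown helper unit
      ansible.builtin.copy:
        src: edpm-container-shutdown-service
        dest: /etc/systemd/system/edpm-container-shutdown.service
        owner: root
        group: root
        mode: '0644'

    - name: Install a preset so the unit stays enabled across re-provisioning
      ansible.builtin.copy:
        src: 91-edpm-container-shutdown-preset
        dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
        owner: root
        group: root
        mode: '0644'

    - name: Reload systemd, then enable and start the unit
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true   # produces the "Reloading." journal line above
        enabled: true
        state: started
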
Nov 25 20:01:02 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:02 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:02 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:03 np0005535963 python3.9[85754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:03 np0005535963 python3.9[85832]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:04 np0005535963 python3.9[85984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:05 np0005535963 python3.9[86062]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:06 np0005535963 python3.9[86214]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:06 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:06 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:06 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:06 np0005535963 systemd[1]: Starting Create netns directory...
Nov 25 20:01:06 np0005535963 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 20:01:06 np0005535963 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 20:01:06 np0005535963 systemd[1]: Finished Create netns directory.
Nov 25 20:01:07 np0005535963 python3.9[86408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:08 np0005535963 python3.9[86560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:09 np0005535963 python3.9[86683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764118867.8676949-468-12510529816077/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:10 np0005535963 python3.9[86835]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:01:10 np0005535963 python3.9[86987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:01:11 np0005535963 python3.9[87110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764118870.4397001-493-180284602995280/.source.json _original_basename=.1gu1lw0j follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:12 np0005535963 python3.9[87262]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:15 np0005535963 python3.9[87689]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 20:01:16 np0005535963 python3.9[87841]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
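container_config_data gathers the *.json startup configs and container_config_hash derives a content hash under /var/lib/config-data, so a container is only recreated when its configuration actually changes. Conceptually (not the module's literal implementation):

    find /var/lib/edpm-config/container-startup-config/ovn_controller -name '*.json' -print0 \
      | sort -z | xargs -0 cat | sha256sum    # stable digest over the config set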
Nov 25 20:01:17 np0005535963 python3.9[87993]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 20:01:17 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:18 np0005535963 python3[88156]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:01:18 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:20 np0005535963 systemd[1]: var-lib-containers-storage-overlay-compat3633639694-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 20:01:24 np0005535963 podman[88169]: 2025-11-26 01:01:24.125756255 +0000 UTC m=+5.445713604 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:01:24 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:24 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:24 np0005535963 podman[88289]: 2025-11-26 01:01:24.309053242 +0000 UTC m=+0.058780279 container create e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 20:01:24 np0005535963 podman[88289]: 2025-11-26 01:01:24.273523293 +0000 UTC m=+0.023250370 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:01:24 np0005535963 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 20:01:24 np0005535963 python3[88156]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 20:01:25 np0005535963 python3.9[88478]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:01:26 np0005535963 python3.9[88632]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:26 np0005535963 python3.9[88708]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:01:27 np0005535963 python3.9[88859]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764118886.9806035-581-103539386862361/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
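The file copied here is the systemd wrapper for the container; its contents are not logged. A minimal sketch of what such a wrapper typically looks like, taking the PID file from the --conmon-pidfile above; the helper path is assumed, since only its syslog tag (edpm-start-podman-container) appears in this log:

    cat > /etc/systemd/system/edpm_ovn_controller.service <<'EOF'
    [Unit]
    Description=ovn_controller container
    After=openvswitch.service

    [Service]
    Type=forking
    PIDFile=/run/ovn_controller.pid
    Restart=always
    ExecStart=/usr/libexec/edpm-start-podman-container ovn_controller
    ExecStop=/usr/bin/podman stop -t 10 ovn_controller

    [Install]
    WantedBy=multi-user.target
    EOF

The two ansible-systemd tasks that follow then amount to `systemctl daemon-reload` and `systemctl enable --now edpm_ovn_controller.service`.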
Nov 25 20:01:28 np0005535963 python3.9[88935]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:01:28 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:28 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:28 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:29 np0005535963 python3.9[89046]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:29 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:29 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:29 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:29 np0005535963 systemd[1]: Starting ovn_controller container...
Nov 25 20:01:29 np0005535963 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 20:01:29 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:01:29 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad725ec72aac8e3d7f8b337396e570480eeba640e090c93f5c9e2f547f653aab/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 20:01:29 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.
Nov 25 20:01:29 np0005535963 podman[89089]: 2025-11-26 01:01:29.787805654 +0000 UTC m=+0.175807130 container init e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:01:29 np0005535963 ovn_controller[89102]: + sudo -E kolla_set_configs
Nov 25 20:01:29 np0005535963 podman[89089]: 2025-11-26 01:01:29.821404637 +0000 UTC m=+0.209406063 container start e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 20:01:29 np0005535963 edpm-start-podman-container[89089]: ovn_controller
Nov 25 20:01:29 np0005535963 systemd[1]: Created slice User Slice of UID 0.
Nov 25 20:01:29 np0005535963 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 20:01:29 np0005535963 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 20:01:29 np0005535963 systemd[1]: Starting User Manager for UID 0...
Nov 25 20:01:29 np0005535963 edpm-start-podman-container[89088]: Creating additional drop-in dependency for "ovn_controller" (e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16)
Nov 25 20:01:29 np0005535963 podman[89109]: 2025-11-26 01:01:29.98166176 +0000 UTC m=+0.140518168 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 20:01:29 np0005535963 systemd[1]: e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16-33ece921abf8d185.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:01:29 np0005535963 systemd[1]: e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16-33ece921abf8d185.service: Failed with result 'exit-code'.
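The transient e53150…-33ece921abf8d185.service unit is a single podman healthcheck run. This first probe fires while ovn-controller is still connecting, so it exits 1 and the 20:01:29 status reads health_status=starting with health_failing_streak=1; by 20:02:00 the same probe reports healthy. To reproduce by hand:

    podman healthcheck run ovn_controller && echo healthy || echo "unhealthy (exit $?)"
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'   # shows "(healthy)" once passing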
Nov 25 20:01:29 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:30 np0005535963 systemd[89141]: Queued start job for default target Main User Target.
Nov 25 20:01:30 np0005535963 systemd[89141]: Created slice User Application Slice.
Nov 25 20:01:30 np0005535963 systemd[89141]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 20:01:30 np0005535963 systemd[89141]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 20:01:30 np0005535963 systemd[89141]: Reached target Paths.
Nov 25 20:01:30 np0005535963 systemd[89141]: Reached target Timers.
Nov 25 20:01:30 np0005535963 systemd[89141]: Starting D-Bus User Message Bus Socket...
Nov 25 20:01:30 np0005535963 systemd[89141]: Starting Create User's Volatile Files and Directories...
Nov 25 20:01:30 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:30 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:30 np0005535963 systemd[89141]: Listening on D-Bus User Message Bus Socket.
Nov 25 20:01:30 np0005535963 systemd[89141]: Reached target Sockets.
Nov 25 20:01:30 np0005535963 systemd[89141]: Finished Create User's Volatile Files and Directories.
Nov 25 20:01:30 np0005535963 systemd[89141]: Reached target Basic System.
Nov 25 20:01:30 np0005535963 systemd[89141]: Reached target Main User Target.
Nov 25 20:01:30 np0005535963 systemd[89141]: Startup finished in 149ms.
Nov 25 20:01:30 np0005535963 systemd[1]: Started User Manager for UID 0.
Nov 25 20:01:30 np0005535963 systemd[1]: Started ovn_controller container.
Nov 25 20:01:30 np0005535963 systemd[1]: Started Session c1 of User root.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: INFO:__main__:Validating config file
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: INFO:__main__:Writing out command to execute
Nov 25 20:01:30 np0005535963 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: ++ cat /run_command
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + ARGS=
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + sudo kolla_copy_cacerts
Nov 25 20:01:30 np0005535963 systemd[1]: Started Session c2 of User root.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + [[ ! -n '' ]]
Nov 25 20:01:30 np0005535963 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + . kolla_extend_start
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + umask 0022
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
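The shell trace above is kolla's standard start sequence: copy configuration per the COPY_ALWAYS strategy, read back the command kolla_set_configs wrote to /run_command, install the mounted CA bundle, then exec the payload so ovn-controller replaces the shell as the container's main process. Condensed:

    sudo -E kolla_set_configs      # copy files named in config.json (KOLLA_CONFIG_STRATEGY=COPY_ALWAYS)
    CMD="$(cat /run_command)"      # command emitted by kolla_set_configs
    sudo kolla_copy_cacerts        # merge the mounted tls-ca-bundle.pem into the trust store
    . kolla_extend_start           # image-specific hook
    umask 0022
    exec $CMD                      # ovn-controller takes over as the container's PID 1 payload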
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4680] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4694] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4712] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4721] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4726] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 20:01:30 np0005535963 kernel: br-int: entered promiscuous mode
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:01:30 np0005535963 ovn_controller[89102]: 2025-11-26T01:01:30Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.4961] manager: (ovn-03cca6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 20:01:30 np0005535963 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.5256] device (genev_sys_6081): carrier: link connected
Nov 25 20:01:30 np0005535963 NetworkManager[48886]: <info>  [1764118890.5260] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 20:01:30 np0005535963 systemd-udevd[89252]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 20:01:30 np0005535963 systemd-udevd[89258]: Network interface NamePolicy= disabled on kernel command line.
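ovn-controller found its southbound endpoint (ssl:ovsdbserver-sb.openstack.svc:6642) and tunnel settings in the local Open_vSwitch table's external_ids; the genev_sys_6081 interface above is the Geneve encap port it programmed. The relevant keys can be read back with ovs-vsctl (values shown are examples from this log):

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote       # "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type   # "geneve"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip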
Nov 25 20:01:31 np0005535963 python3.9[89368]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:31 np0005535963 ovs-vsctl[89369]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 20:01:31 np0005535963 python3.9[89521]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:32 np0005535963 ovs-vsctl[89523]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 20:01:32 np0005535963 python3.9[89676]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:33 np0005535963 ovs-vsctl[89677]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
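The db_ctl_base error at 20:01:32 is expected: `get` aborts when the key is missing, which the playbook pipes through sed and tolerates, whereas `remove` itself is a no-op for an absent key. The probe can be made silent with --if-exists:

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options   # prints nothing when unset
    ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options            # idempotent either way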
Nov 25 20:01:33 np0005535963 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 20:01:33 np0005535963 systemd[1]: session-18.scope: Consumed 1min 8.088s CPU time.
Nov 25 20:01:33 np0005535963 systemd-logind[800]: Session 18 logged out. Waiting for processes to exit.
Nov 25 20:01:33 np0005535963 systemd-logind[800]: Removed session 18.
Nov 25 20:01:39 np0005535963 systemd-logind[800]: New session 20 of user zuul.
Nov 25 20:01:39 np0005535963 systemd[1]: Started Session 20 of User zuul.
Nov 25 20:01:40 np0005535963 python3.9[89855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:01:40 np0005535963 systemd[1]: Stopping User Manager for UID 0...
Nov 25 20:01:40 np0005535963 systemd[89141]: Activating special unit Exit the Session...
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped target Main User Target.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped target Basic System.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped target Paths.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped target Sockets.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped target Timers.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 20:01:40 np0005535963 systemd[89141]: Closed D-Bus User Message Bus Socket.
Nov 25 20:01:40 np0005535963 systemd[89141]: Stopped Create User's Volatile Files and Directories.
Nov 25 20:01:40 np0005535963 systemd[89141]: Removed slice User Application Slice.
Nov 25 20:01:40 np0005535963 systemd[89141]: Reached target Shutdown.
Nov 25 20:01:40 np0005535963 systemd[89141]: Finished Exit the Session.
Nov 25 20:01:40 np0005535963 systemd[89141]: Reached target Exit the Session.
Nov 25 20:01:40 np0005535963 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 20:01:40 np0005535963 systemd[1]: Stopped User Manager for UID 0.
Nov 25 20:01:40 np0005535963 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 20:01:40 np0005535963 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 20:01:40 np0005535963 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 20:01:40 np0005535963 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 20:01:40 np0005535963 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 20:01:41 np0005535963 python3.9[90015]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:01:43 np0005535963 python3.9[90178]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:01:43 np0005535963 systemd[1]: Reloading.
Nov 25 20:01:43 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:01:43 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:01:44 np0005535963 python3.9[90366]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:01:44 np0005535963 network[90383]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:01:44 np0005535963 network[90384]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:01:44 np0005535963 network[90385]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:01:50 np0005535963 python3.9[90647]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:51 np0005535963 python3.9[90800]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:52 np0005535963 python3.9[90954]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:53 np0005535963 python3.9[91107]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:54 np0005535963 python3.9[91260]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:54 np0005535963 python3.9[91413]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:55 np0005535963 python3.9[91566]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:01:58 np0005535963 python3.9[91719]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:58 np0005535963 python3.9[91871]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:01:59 np0005535963 python3.9[92023]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:00 np0005535963 ovn_controller[89102]: 2025-11-26T01:02:00Z|00025|memory|INFO|16256 kB peak resident set size after 29.8 seconds
Nov 25 20:02:00 np0005535963 ovn_controller[89102]: 2025-11-26T01:02:00Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 20:02:00 np0005535963 podman[92147]: 2025-11-26 01:02:00.296912551 +0000 UTC m=+0.161621484 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 20:02:00 np0005535963 python3.9[92192]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:01 np0005535963 python3.9[92352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:02 np0005535963 python3.9[92504]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:02 np0005535963 python3.9[92656]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:03 np0005535963 python3.9[92808]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:04 np0005535963 python3.9[92960]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:05 np0005535963 python3.9[93112]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:06 np0005535963 python3.9[93264]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:06 np0005535963 python3.9[93416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:07 np0005535963 python3.9[93568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:08 np0005535963 python3.9[93720]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:02:09 np0005535963 python3.9[93872]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
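journald renders embedded newlines as #012, so the _raw_params above is a multi-line script. Decoded:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi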
Nov 25 20:02:10 np0005535963 python3.9[94024]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:02:11 np0005535963 python3.9[94176]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:02:11 np0005535963 systemd[1]: Reloading.
Nov 25 20:02:11 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:02:11 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:02:12 np0005535963 python3.9[94363]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:13 np0005535963 python3.9[94516]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:13 np0005535963 python3.9[94669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:14 np0005535963 python3.9[94822]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:15 np0005535963 python3.9[94975]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:17 np0005535963 python3.9[95128]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:02:19 np0005535963 python3.9[95281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
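Taken together, the tasks from 20:01:50 onward retire the old TripleO nova-libvirt units: stop and disable each one, delete its unit files from both /usr/lib/systemd/system and /etc/systemd/system, reload systemd, then clear any lingering failed state. As a single shell pass:

    units="tripleo_nova_libvirt.target tripleo_nova_virtlogd_wrapper.service
           tripleo_nova_virtnodedevd.service tripleo_nova_virtproxyd.service
           tripleo_nova_virtqemud.service tripleo_nova_virtsecretd.service
           tripleo_nova_virtstoraged.service"
    for u in $units; do
      systemctl disable --now "$u"
      rm -f "/usr/lib/systemd/system/$u" "/etc/systemd/system/$u"
    done
    systemctl daemon-reload
    for u in $units; do systemctl reset-failed "$u"; done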
Nov 25 20:02:20 np0005535963 python3.9[95434]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 20:02:21 np0005535963 python3.9[95587]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:02:22 np0005535963 python3.9[95745]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
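The getent/group/user triplet pins the libvirt account to uid/gid 42473 with no login shell, so file ownership matches what the containerized services expect on disk. Shell equivalent:

    getent group libvirt  || groupadd -g 42473 libvirt
    getent passwd libvirt || useradd -u 42473 -g libvirt -s /sbin/nologin -c 'libvirt user' libvirt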
Nov 25 20:02:24 np0005535963 python3.9[95905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:02:25 np0005535963 python3.9[95989]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
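The dnf task installs the modular libvirt stack plus QEMU, swtpm, OVMF firmware and Ceph client bits; the trailing spaces inside the first four package names look like a templating artifact and are apparently harmless here, since the install proceeds (the SELinux policy reloads and man-db cache update that follow are its side effects). Equivalent command line:

    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram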
Nov 25 20:02:30 np0005535963 podman[96001]: 2025-11-26 01:02:30.571232983 +0000 UTC m=+0.121694507 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:02:54 np0005535963 kernel: SELinux:  Converting 2757 SID table entries...
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:02:54 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:03:01 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 20:03:01 np0005535963 podman[96215]: 2025-11-26 01:03:01.605382099 +0000 UTC m=+0.146592521 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:03:03 np0005535963 kernel: SELinux:  Converting 2757 SID table entries...
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:03:03 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:03:32 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 20:03:32 np0005535963 podman[104102]: 2025-11-26 01:03:32.640275687 +0000 UTC m=+0.178311587 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 20:04:03 np0005535963 podman[113075]: 2025-11-26 01:04:03.59789392 +0000 UTC m=+0.148275748 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 25 20:04:05 np0005535963 kernel: SELinux:  Converting 2758 SID table entries...
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability open_perms=1
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability always_check_network=0
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 20:04:05 np0005535963 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 20:04:06 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 20:04:06 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 20:04:06 np0005535963 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Nov 25 20:04:14 np0005535963 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 20:04:14 np0005535963 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 20:04:14 np0005535963 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 20:04:14 np0005535963 systemd[1]: sshd.service: Consumed 1.607s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 20:04:14 np0005535963 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 20:04:14 np0005535963 systemd[1]: Stopping sshd-keygen.target...
Nov 25 20:04:14 np0005535963 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:04:14 np0005535963 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:04:14 np0005535963 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 20:04:14 np0005535963 systemd[1]: Reached target sshd-keygen.target.
Nov 25 20:04:14 np0005535963 systemd[1]: Starting OpenSSH server daemon...
Nov 25 20:04:14 np0005535963 systemd[1]: Started OpenSSH server daemon.
Nov 25 20:04:17 np0005535963 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 20:04:17 np0005535963 systemd[1]: Starting man-db-cache-update.service...
Nov 25 20:04:17 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:17 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:17 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:17 np0005535963 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 20:04:20 np0005535963 python3.9[116822]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:04:21 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:21 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:21 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:22 np0005535963 python3.9[118171]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:04:22 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:22 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:22 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:23 np0005535963 python3.9[119264]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:04:23 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:23 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:23 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:24 np0005535963 python3.9[120412]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:04:24 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:24 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:24 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
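[editor's note] The four ansible.builtin.systemd invocations above (libvirtd, libvirtd-tcp.socket, libvirtd-tls.socket, virtproxyd-tcp.socket) stop, disable, and mask the monolithic libvirtd stack before the modular per-driver daemons are enabled; each task makes systemd reload, which is why the sysv-generator and rc-local-generator warnings repeat. A rough systemctl equivalent of those tasks, as a sketch (unit names copied from the log):

#!/usr/bin/env python3
# Mirror the ansible tasks above: state=stopped, enabled=False, masked=True.
import subprocess

units = [
    "libvirtd.service",
    "libvirtd-tcp.socket",
    "libvirtd-tls.socket",
    "virtproxyd-tcp.socket",
]
for unit in units:
    subprocess.run(["systemctl", "stop", unit], check=False)     # state=stopped
    subprocess.run(["systemctl", "disable", unit], check=False)  # enabled=False
    subprocess.run(["systemctl", "mask", unit], check=False)     # masked=True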
Nov 25 20:04:26 np0005535963 python3.9[121615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:26 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:26 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:26 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:27 np0005535963 python3.9[122762]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:27 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:27 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:27 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:28 np0005535963 python3.9[123686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:28 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:28 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:28 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:29 np0005535963 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 20:04:29 np0005535963 systemd[1]: Finished man-db-cache-update.service.
Nov 25 20:04:29 np0005535963 systemd[1]: man-db-cache-update.service: Consumed 14.135s CPU time.
Nov 25 20:04:29 np0005535963 systemd[1]: run-r9856d10df4cb4438832287d6f08a1d37.service: Deactivated successfully.
Nov 25 20:04:29 np0005535963 python3.9[123964]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:30 np0005535963 python3.9[124120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:30 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:30 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:30 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:31 np0005535963 python3.9[124310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 20:04:32 np0005535963 systemd[1]: Reloading.
Nov 25 20:04:32 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:04:32 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:04:32 np0005535963 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 20:04:32 np0005535963 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 20:04:33 np0005535963 python3.9[124503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:34 np0005535963 podman[124630]: 2025-11-26 01:04:34.196993594 +0000 UTC m=+0.171479729 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 20:04:34 np0005535963 python3.9[124677]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:35 np0005535963 python3.9[124840]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:36 np0005535963 python3.9[124995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:37 np0005535963 python3.9[125150]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:38 np0005535963 python3.9[125305]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:39 np0005535963 python3.9[125460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:41 np0005535963 python3.9[125615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:42 np0005535963 python3.9[125770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:43 np0005535963 python3.9[125925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:44 np0005535963 python3.9[126080]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:45 np0005535963 python3.9[126235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:46 np0005535963 python3.9[126390]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 20:04:47 np0005535963 python3.9[126545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
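[editor's note] This run of tasks enables the activation sockets for all five modular libvirt daemons: the main, read-only (-ro), and admin sockets for virtnodedevd, virtproxyd, virtqemud, and virtsecretd, plus virtlogd.socket and virtlogd-admin.socket (no virtlogd-ro.socket appears in the log). The same loop sketched over systemctl, with socket names taken from the tasks above:

#!/usr/bin/env python3
# Enable the modular libvirt activation sockets, mirroring the tasks above
# (enabled=True, masked=False).
import subprocess

daemons = ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]
for daemon in daemons:
    for suffix in ("", "-ro", "-admin"):
        if daemon == "virtlogd" and suffix == "-ro":
            continue  # virtlogd has no -ro socket in this deployment
        unit = f"{daemon}{suffix}.socket"
        subprocess.run(["systemctl", "unmask", unit], check=False)  # masked=False
        subprocess.run(["systemctl", "enable", unit], check=True)   # enabled=True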
Nov 25 20:04:48 np0005535963 python3.9[126700]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:04:49 np0005535963 python3.9[126852]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:04:50 np0005535963 python3.9[127004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:04:50 np0005535963 python3.9[127156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:04:51 np0005535963 python3.9[127308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:04:52 np0005535963 python3.9[127460]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
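[editor's note] The ansible.builtin.file tasks above create the PKI directory tree libvirt and QEMU will use (/etc/pki/libvirt, /etc/pki/libvirt/private, /etc/pki/CA, /etc/pki/qemu), each labeled setype=container_file_t so containerized services can read it. A minimal sketch of the same effect, with owner/group/mode and the SELinux type taken from the tasks:

#!/usr/bin/env python3
# Create a root-owned directory and label it container_file_t,
# as the ansible.builtin.file tasks above request.
import os
import subprocess

def make_labeled_dir(path: str, group: str = "root", mode: int = 0o755) -> None:
    os.makedirs(path, exist_ok=True)
    os.chmod(path, mode)  # mode=0755 in the tasks above
    subprocess.run(["chown", f"root:{group}", path], check=True)
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)

make_labeled_dir("/etc/pki/libvirt")
make_labeled_dir("/etc/pki/libvirt/private")
make_labeled_dir("/etc/pki/CA")
make_labeled_dir("/etc/pki/qemu", group="qemu")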
Nov 25 20:04:53 np0005535963 python3.9[127612]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:04:54 np0005535963 python3.9[127737]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119092.6558325-554-240645563849692/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:55 np0005535963 python3.9[127889]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:04:55 np0005535963 python3.9[128014]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119094.5978892-554-4192007772857/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:56 np0005535963 python3.9[128166]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:04:57 np0005535963 python3.9[128291]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119096.049073-554-157738834929758/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:58 np0005535963 python3.9[128443]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:04:58 np0005535963 python3.9[128568]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119097.54878-554-67701924656650/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:04:59 np0005535963 python3.9[128720]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:00 np0005535963 python3.9[128845]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119099.1509783-554-226501443389272/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:01 np0005535963 python3.9[128997]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:02 np0005535963 python3.9[129122]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119100.6943731-554-214720068818278/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:02 np0005535963 python3.9[129274]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:03 np0005535963 python3.9[129397]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119102.229602-554-172719177497432/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
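[editor's note] Each libvirt config file above is handled by a stat-then-copy pair: ansible.legacy.stat reads a SHA-1 of the current destination, and ansible.legacy.copy replaces it only when that differs from the staged source (the logged checksum= values; the contents themselves are masked as NOT_LOGGING_PARAMETER). The idempotency logic, sketched against one of the logged destinations:

#!/usr/bin/env python3
# Copy a file only when its SHA-1 differs from the staged source,
# as the stat/copy pairs above do.
import hashlib
import os
import shutil

def sha1(path: str) -> str:
    with open(path, "rb") as fh:
        return hashlib.sha1(fh.read()).hexdigest()

def copy_if_changed(src: str, dest: str, mode: int = 0o640) -> bool:
    if os.path.exists(dest) and sha1(src) == sha1(dest):
        return False  # unchanged; nothing to do
    shutil.copy2(src, dest)
    os.chmod(dest, mode)  # mode=0640, owner/group libvirt in the tasks above
    shutil.chown(dest, "libvirt", "libvirt")
    return True

copy_if_changed("./virtlogd.conf", "/etc/libvirt/virtlogd.conf")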
Nov 25 20:05:04 np0005535963 python3.9[129549]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:04 np0005535963 podman[129552]: 2025-11-26 01:05:04.532955965 +0000 UTC m=+0.103020793 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 20:05:05 np0005535963 python3.9[129701]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764119103.7589908-554-1319113337129/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:05 np0005535963 python3.9[129853]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
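[editor's note] The command above seeds a SASL credential for the 'migration' user in libvirt's password database: -f selects the database file, -p reads the password from stdin, -a names the application, and -u sets the realm (openstack). A sketch of the same call with a placeholder secret ("CHANGE_ME" is purely a stand-in; the playbook supplied the real value via stdin):

#!/usr/bin/env python3
# Store a SASL password for live migration, mirroring the logged
# saslpasswd2 invocation. Never hard-code the real secret.
import subprocess

secret = "CHANGE_ME"  # placeholder only
subprocess.run(
    ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
     "-p", "-a", "libvirt", "-u", "openstack", "migration"],
    input=secret, text=True, check=True,
)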
Nov 25 20:05:06 np0005535963 python3.9[130006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:07 np0005535963 python3.9[130158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:08 np0005535963 python3.9[130310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:09 np0005535963 python3.9[130462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:09 np0005535963 python3.9[130614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:10 np0005535963 python3.9[130766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:11 np0005535963 python3.9[130918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:12 np0005535963 python3.9[131070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:12 np0005535963 python3.9[131222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:13 np0005535963 python3.9[131374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:14 np0005535963 python3.9[131526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:15 np0005535963 python3.9[131678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:16 np0005535963 python3.9[131830]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:17 np0005535963 python3.9[131982]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:18 np0005535963 python3.9[132134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:18 np0005535963 python3.9[132257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119117.5081894-775-182333883578798/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:19 np0005535963 python3.9[132409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:20 np0005535963 python3.9[132532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119119.0571535-775-65339780904392/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:21 np0005535963 python3.9[132684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:21 np0005535963 python3.9[132807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119120.4654613-775-77854902785822/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:22 np0005535963 python3.9[132959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:23 np0005535963 python3.9[133082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119121.9315357-775-67381391925193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:24 np0005535963 python3.9[133234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:24 np0005535963 python3.9[133357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119123.464458-775-54250097284122/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:25 np0005535963 python3.9[133509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:26 np0005535963 python3.9[133632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119124.93804-775-89411742119729/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:27 np0005535963 python3.9[133784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:27 np0005535963 python3.9[133907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119126.4654682-775-21703238546609/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:28 np0005535963 python3.9[134059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:29 np0005535963 python3.9[134182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119128.0477693-775-1697712813889/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:30 np0005535963 python3.9[134334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:30 np0005535963 python3.9[134457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119129.5338435-775-49517205635989/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:31 np0005535963 python3.9[134609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:32 np0005535963 python3.9[134732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119131.0529017-775-202457344933019/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:33 np0005535963 python3.9[134884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:33 np0005535963 python3.9[135007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119132.6541324-775-95677800749539/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:34 np0005535963 podman[135159]: 2025-11-26 01:05:34.734869817 +0000 UTC m=+0.126730897 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:05:34 np0005535963 python3.9[135160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:35 np0005535963 python3.9[135308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119134.1595712-775-194939538920358/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:36 np0005535963 python3.9[135460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:37 np0005535963 python3.9[135583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119135.7357714-775-129544569717503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:37 np0005535963 python3.9[135735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:05:38 np0005535963 python3.9[135858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119137.2663863-775-223944026481835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
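[editor's note] All fifteen override.conf files above are rendered from a single template (libvirt-socket.unit.j2; every copy logs the same checksum 0bad41f4...), so each modular libvirt socket receives an identical drop-in. The template's contents are not logged, so the [Socket] keys below are hypothetical, shown only to illustrate the drop-in mechanism:

#!/usr/bin/env python3
# Write a systemd drop-in like the override.conf files above.
# NOTE: the SocketMode/SocketGroup values are HYPOTHETICAL; the real
# template's contents were not logged, only its checksum.
from pathlib import Path

OVERRIDE = "[Socket]\nSocketMode=0660\nSocketGroup=libvirt\n"

dropin_dir = Path("/etc/systemd/system/virtqemud.socket.d")
dropin_dir.mkdir(mode=0o755, parents=True, exist_ok=True)
(dropin_dir / "override.conf").write_text(OVERRIDE)
# A `systemctl daemon-reload` is needed afterwards to pick the drop-in up.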
Nov 25 20:05:39 np0005535963 python3.9[136008]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:05:40 np0005535963 python3.9[136163]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
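[editor's note] ansible.posix.seboolean above persistently enables os_enable_vtpm, the SELinux boolean used for swtpm-backed virtual TPM devices; the persistent change rebuilds the policy, which is what triggers the avc: op=load_policy line that follows. The direct equivalent, as a sketch:

#!/usr/bin/env python3
# Persistently enable the os_enable_vtpm SELinux boolean,
# matching the ansible.posix.seboolean task above.
import subprocess

subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)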
Nov 25 20:05:42 np0005535963 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 20:05:42 np0005535963 python3.9[136319]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:43 np0005535963 python3.9[136471]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:44 np0005535963 python3.9[136623]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:44 np0005535963 python3.9[136775]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:45 np0005535963 python3.9[136927]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:46 np0005535963 python3.9[137079]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:47 np0005535963 python3.9[137231]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:48 np0005535963 python3.9[137383]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:49 np0005535963 python3.9[137535]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:49 np0005535963 python3.9[137687]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
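[editor's note] The remote_src=True copies above fan a single operator-provided keypair (/var/lib/openstack/certs/libvirt/default/tls.crt and tls.key) out to every location libvirt and QEMU expect: server/client certs under /etc/pki/libvirt, the CA at /etc/pki/CA/cacert.pem, and a qemu-group-readable set under /etc/pki/qemu. A quick consistency check over those paths, sketched with openssl:

#!/usr/bin/env python3
# Verify each deployed certificate validates against the deployed CA.
import subprocess

CA = "/etc/pki/CA/cacert.pem"
certs = [
    "/etc/pki/libvirt/servercert.pem",
    "/etc/pki/libvirt/clientcert.pem",
    "/etc/pki/qemu/server-cert.pem",
    "/etc/pki/qemu/client-cert.pem",
]
for cert in certs:
    subprocess.run(["openssl", "verify", "-CAfile", CA, cert], check=True)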
Nov 25 20:05:50 np0005535963 python3.9[137839]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:05:50 np0005535963 systemd[1]: Reloading.
Nov 25 20:05:51 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:51 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:51 np0005535963 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 20:05:51 np0005535963 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 20:05:51 np0005535963 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 20:05:51 np0005535963 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 20:05:51 np0005535963 systemd[1]: Starting libvirt logging daemon...
Nov 25 20:05:51 np0005535963 systemd[1]: Started libvirt logging daemon.
Nov 25 20:05:52 np0005535963 python3.9[138033]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:05:52 np0005535963 systemd[1]: Reloading.
Nov 25 20:05:52 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:52 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:52 np0005535963 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 20:05:52 np0005535963 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 20:05:52 np0005535963 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 20:05:52 np0005535963 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 20:05:52 np0005535963 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 20:05:52 np0005535963 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 20:05:52 np0005535963 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 20:05:52 np0005535963 systemd[1]: Started libvirt nodedev daemon.
Nov 25 20:05:53 np0005535963 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 20:05:53 np0005535963 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 20:05:53 np0005535963 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 20:05:53 np0005535963 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 20:05:53 np0005535963 python3.9[138251]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:05:53 np0005535963 systemd[1]: Reloading.
Nov 25 20:05:53 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:53 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:54 np0005535963 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 20:05:54 np0005535963 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 20:05:54 np0005535963 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 20:05:54 np0005535963 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 20:05:54 np0005535963 systemd[1]: Starting libvirt proxy daemon...
Nov 25 20:05:54 np0005535963 systemd[1]: Started libvirt proxy daemon.
Nov 25 20:05:54 np0005535963 setroubleshoot[138146]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bd171508-360c-423e-be0e-e5a55ed5155c
Nov 25 20:05:54 np0005535963 setroubleshoot[138146]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
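Consolidated, the remediation sealert proposes above is the following sequence (the module name my-virtlogd comes from the log's own suggestion; this generates a temporary local policy, not a proper fix, and should be run as root):

    auditctl -w /etc/shadow -p w             # full auditing, so AVCs carry PATH records
    ausearch -m avc -ts recent               # re-trigger the denial, then inspect recent AVCs
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp        # install the generated module at priority 300
    semodule -l | grep my-virtlogd           # confirm the module is loaded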
Nov 25 20:05:55 np0005535963 python3.9[138472]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:05:55 np0005535963 systemd[1]: Reloading.
Nov 25 20:05:55 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:55 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:55 np0005535963 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 20:05:55 np0005535963 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 20:05:55 np0005535963 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 20:05:55 np0005535963 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 20:05:55 np0005535963 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 20:05:55 np0005535963 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 20:05:55 np0005535963 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 20:05:55 np0005535963 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 20:05:55 np0005535963 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 20:05:55 np0005535963 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 20:05:55 np0005535963 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 20:05:55 np0005535963 systemd[1]: Started libvirt QEMU daemon.
Nov 25 20:05:56 np0005535963 python3.9[138687]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:05:56 np0005535963 systemd[1]: Reloading.
Nov 25 20:05:56 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:05:56 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:05:56 np0005535963 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 20:05:56 np0005535963 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 20:05:56 np0005535963 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 20:05:56 np0005535963 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 20:05:56 np0005535963 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 20:05:56 np0005535963 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 20:05:56 np0005535963 systemd[1]: Starting libvirt secret daemon...
Nov 25 20:05:56 np0005535963 systemd[1]: Started libvirt secret daemon.
Nov 25 20:05:57 np0005535963 python3.9[138899]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:05:58 np0005535963 python3.9[139051]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:05:59 np0005535963 python3.9[139203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:00 np0005535963 python3.9[139326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119159.2160368-1120-8119717586259/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:01 np0005535963 python3.9[139478]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:02 np0005535963 python3.9[139630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:02 np0005535963 python3.9[139708]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:03 np0005535963 python3.9[139860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:04 np0005535963 python3.9[139938]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kbh026ch recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:04 np0005535963 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 20:06:04 np0005535963 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 20:06:05 np0005535963 podman[140038]: 2025-11-26 01:06:05.085294385 +0000 UTC m=+0.139998762 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
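The recurring health_status=healthy entries like the one above come from podman's healthcheck timer running the configured test command ('/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). The same check can be driven by hand; a zero exit status corresponds to health_status=healthy, and on recent podman the last result is also exposed via inspect:

    podman healthcheck run ovn_controller
    echo $?                                  # 0 == healthy
    podman inspect --format '{{.State.Health.Status}}' ovn_controller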
Nov 25 20:06:05 np0005535963 python3.9[140116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:06 np0005535963 python3.9[140194]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:06 np0005535963 python3.9[140346]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:07 np0005535963 python3[140499]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:06:08 np0005535963 python3.9[140651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:09 np0005535963 python3.9[140729]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:10 np0005535963 python3.9[140881]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:10 np0005535963 python3.9[140959]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:11 np0005535963 python3.9[141111]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:12 np0005535963 python3.9[141189]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:13 np0005535963 python3.9[141341]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:13 np0005535963 python3.9[141419]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:14 np0005535963 python3.9[141571]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:15 np0005535963 python3.9[141696]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119174.0155869-1245-223702740062053/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:16 np0005535963 python3.9[141848]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:17 np0005535963 python3.9[142000]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:18 np0005535963 python3.9[142155]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
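Decoded from the #012-escaped block parameter above, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf (validated with 'nft -c -f %s' before being written) reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK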
Nov 25 20:06:18 np0005535963 python3.9[142307]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:19 np0005535963 python3.9[142460]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:06:20 np0005535963 python3.9[142614]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:21 np0005535963 python3.9[142769]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
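Reconstructed from the nft commands logged between 20:06:17 and 20:06:20, the firewall update is a check-then-apply cycle, gated by the edpm-rules.nft.changed marker file (touched before validation at 20:06:16, removed after a successful apply at 20:06:21):

    # dry run: parse the fully assembled ruleset without committing it
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # ensure the chains exist, then flush and repopulate them in one transaction
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -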
Nov 25 20:06:22 np0005535963 python3.9[142921]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:22 np0005535963 python3.9[143044]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119181.6762486-1317-24416683392951/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:23 np0005535963 python3.9[143196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:24 np0005535963 python3.9[143319]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119183.1936247-1332-247464583702819/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:25 np0005535963 python3.9[143471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:25 np0005535963 python3.9[143594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119184.6597276-1347-184404080069354/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:26 np0005535963 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 20:06:26 np0005535963 irqbalance[791]: IRQ 26 affinity is now unmanaged
Nov 25 20:06:26 np0005535963 python3.9[143746]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:06:26 np0005535963 systemd[1]: Reloading.
Nov 25 20:06:27 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:27 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:27 np0005535963 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 20:06:28 np0005535963 python3.9[143936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 20:06:28 np0005535963 systemd[1]: Reloading.
Nov 25 20:06:28 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:28 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:28 np0005535963 systemd[1]: Reloading.
Nov 25 20:06:29 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:29 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:29 np0005535963 systemd[1]: session-20.scope: Deactivated successfully.
Nov 25 20:06:29 np0005535963 systemd[1]: session-20.scope: Consumed 4min 732ms CPU time.
Nov 25 20:06:29 np0005535963 systemd-logind[800]: Session 20 logged out. Waiting for processes to exit.
Nov 25 20:06:29 np0005535963 systemd-logind[800]: Removed session 20.
Nov 25 20:06:34 np0005535963 systemd-logind[800]: New session 21 of user zuul.
Nov 25 20:06:34 np0005535963 systemd[1]: Started Session 21 of User zuul.
Nov 25 20:06:35 np0005535963 podman[144094]: 2025-11-26 01:06:35.594811362 +0000 UTC m=+0.135741463 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:06:36 np0005535963 python3.9[144213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:06:37 np0005535963 python3.9[144369]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:06:37 np0005535963 systemd[1]: Reloading.
Nov 25 20:06:37 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:37 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
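A systemd_service invocation with name=None and only daemon_reload=True, as above, performs nothing but a daemon reload; it is the module-level counterpart of:

    systemctl daemon-reload

Each reload re-runs the systemd unit generators, which is why every "Reloading." entry in this log is followed by the same systemd-rc-local-generator and systemd-sysv-generator notices.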
Nov 25 20:06:39 np0005535963 python3.9[144554]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:06:39 np0005535963 network[144571]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:06:39 np0005535963 network[144572]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:06:39 np0005535963 network[144573]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:06:43 np0005535963 python3.9[144844]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:06:45 np0005535963 python3.9[144997]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:46 np0005535963 python3.9[145149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:06:47 np0005535963 python3.9[145301]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
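With the #012 escapes expanded, the shell fragment passed via _raw_params above reads:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

i.e. certmonger is stopped, disabled, and masked unless a local unit override exists.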
Nov 25 20:06:48 np0005535963 python3.9[145453]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:06:49 np0005535963 python3.9[145606]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:06:49 np0005535963 systemd[1]: Reloading.
Nov 25 20:06:49 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:06:49 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:06:50 np0005535963 python3.9[145793]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:06:51 np0005535963 python3.9[145946]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:06:52 np0005535963 python3.9[146096]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:06:53 np0005535963 python3.9[146248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:06:54 np0005535963 python3.9[146369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119212.7265546-133-239784506223448/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:06:55 np0005535963 python3.9[146521]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 25 20:06:56 np0005535963 python3.9[146673]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 25 20:06:56 np0005535963 python3.9[146826]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 20:06:58 np0005535963 python3.9[146984]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
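On a host where none of these accounts exist yet, the group and user module calls above amount to roughly the following (a sketch only; the Ansible modules are idempotent and skip whatever is already present):

    groupadd libvirt                        # no-op when the group already exists
    groupadd -g 42405 ceilometer
    useradd -u 42405 -g ceilometer -G libvirt -s /sbin/nologin \
            -c 'ceilometer user' -m ceilometer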
Nov 25 20:06:59 np0005535963 python3.9[147142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:00 np0005535963 python3.9[147263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119219.0635197-201-251366912127314/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:00 np0005535963 python3.9[147413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:01 np0005535963 python3.9[147534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119220.3940644-201-20849807986900/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:02 np0005535963 python3.9[147684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:02 np0005535963 python3.9[147805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119221.847827-201-270734286968694/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:03 np0005535963 python3.9[147955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:07:04 np0005535963 python3.9[148107]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:07:05 np0005535963 python3.9[148259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:05 np0005535963 podman[148354]: 2025-11-26 01:07:05.796836804 +0000 UTC m=+0.110741235 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 20:07:05 np0005535963 python3.9[148397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119224.7611935-260-215029210560904/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:06 np0005535963 python3.9[148557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:07 np0005535963 python3.9[148633]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:07 np0005535963 python3.9[148783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:08 np0005535963 python3.9[148904]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119227.2894554-260-180883321216093/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:09 np0005535963 python3.9[149054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:09 np0005535963 python3.9[149175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119228.745312-260-217811989706865/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:10 np0005535963 python3.9[149325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:11 np0005535963 python3.9[149446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119230.117377-260-269152683814332/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:12 np0005535963 python3.9[149596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:12 np0005535963 python3.9[149717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119231.5363963-260-262452440029410/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:13 np0005535963 python3.9[149867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:14 np0005535963 python3.9[149988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119233.0275667-260-68249614085935/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:14 np0005535963 python3.9[150138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:15 np0005535963 python3.9[150259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119234.3984628-260-78769909847407/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:16 np0005535963 python3.9[150409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:16 np0005535963 python3.9[150530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119235.7239525-260-233508705262145/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:17 np0005535963 python3.9[150680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:18 np0005535963 python3.9[150801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119237.0919023-260-16693841201972/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:19 np0005535963 python3.9[150951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:19 np0005535963 python3.9[151072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119238.429631-260-23032293201917/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
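The mode=420 values in the copy invocations above are Ansible rendering an integer file mode in decimal: 420 decimal is 0644 octal (rw-r--r--), the same permissions requested elsewhere in this run as mode=0644. Quick check:

    printf '%o\n' 420        # prints 644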
Nov 25 20:07:20 np0005535963 python3.9[151222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:21 np0005535963 python3.9[151298]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:22 np0005535963 python3.9[151448]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:22 np0005535963 python3.9[151524]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:23 np0005535963 python3.9[151674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:23 np0005535963 python3.9[151750]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:24 np0005535963 python3.9[151902]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:25 np0005535963 python3.9[152054]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:26 np0005535963 python3.9[152206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:07:27 np0005535963 python3.9[152358]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:07:27 np0005535963 systemd[1]: Reloading.
Nov 25 20:07:27 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:07:27 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:07:28 np0005535963 systemd[1]: Listening on Podman API Socket.
Nov 25 20:07:29 np0005535963 python3.9[152552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:29 np0005535963 python3.9[152675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119248.5588408-482-101927870603705/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:07:30 np0005535963 python3.9[152751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:31 np0005535963 python3.9[152874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119248.5588408-482-101927870603705/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:07:32 np0005535963 python3.9[153026]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 25 20:07:33 np0005535963 python3.9[153178]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:07:34 np0005535963 python3[153330]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:07:36 np0005535963 podman[153383]: 2025-11-26 01:07:36.559045798 +0000 UTC m=+0.112363258 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:07:48 np0005535963 podman[153344]: 2025-11-26 01:07:48.564084453 +0000 UTC m=+13.811662295 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 25 20:07:48 np0005535963 podman[153512]: 2025-11-26 01:07:48.761188259 +0000 UTC m=+0.070391915 container create bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true)
Nov 25 20:07:48 np0005535963 podman[153512]: 2025-11-26 01:07:48.722517496 +0000 UTC m=+0.031721222 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
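Note the two 'image pull' events for the same image ID 62d0cdbd8051...: the first (pid 153344, m=+13.8s) is the explicit pull, while the second comes from the create process (pid 153512), which resolves the image again just before emitting its container create event; its in-process timestamp (m=+0.03s) predates the create record even though journald printed it later. A quick way to replay just these events, assuming the standard podman events filters:

    # Image pull events in the window around the pull (times from the log above, UTC).
    podman events --since "2025-11-26 01:07:30" --until "2025-11-26 01:07:50" \
        --filter type=image --filter event=pull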
Nov 25 20:07:48 np0005535963 python3[153330]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
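This PODMAN-CONTAINER-DEBUG line is the exact command edpm_container_manage derived from config_data: each key maps to a CLI flag (net to --network, security_opt to --security-opt, environment to --env, volumes to --volume), and the whole dict is also attached verbatim as the config_data label. A small check that the label round-trips, using standard podman inspect templating:

    # Read back the config_data label the generated command attached above.
    podman inspect ceilometer_agent_compute \
        --format '{{ index .Config.Labels "config_data" }}'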
Nov 25 20:07:49 np0005535963 python3.9[153702]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:07:50 np0005535963 python3.9[153856]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:07:51 np0005535963 python3.9[154007]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119270.8253648-546-144338284832459/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
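The unit body itself is redacted in the log (content=NOT_LOGGING_PARAMETER), so this copy event only records the destination path, owner, and mode. Once the task has run, the installed unit can be read back directly from the host:

    # Show the deployed unit file plus any drop-ins (path from the copy event above).
    systemctl cat edpm_ceilometer_agent_compute.service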
Nov 25 20:07:52 np0005535963 python3.9[154083]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:07:52 np0005535963 systemd[1]: Reloading.
Nov 25 20:07:52 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:07:52 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:07:52 np0005535963 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 20:07:53 np0005535963 python3.9[154194]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:07:53 np0005535963 systemd[1]: Reloading.
Nov 25 20:07:53 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:07:53 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
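The ansible-systemd task at 20:07:53 (state=restarted, enabled=True) is what drives this second reload and the start that follows; its hand-run equivalent, after the daemon-reload already shown, would be:

    # Manual equivalent of the ansible-systemd invocation above.
    systemctl enable edpm_ceilometer_agent_compute.service
    systemctl restart edpm_ceilometer_agent_compute.service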
Nov 25 20:07:54 np0005535963 systemd[1]: Starting ceilometer_agent_compute container...
Nov 25 20:07:54 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:07:54 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:54 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:54 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:54 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
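The four xfs lines are informational: each bind mount triggers a remount notice because the backing filesystem was created without the bigtime feature, capping inode timestamps at 2038-01-19 (0x7fffffff). Nothing fails here; the kernel logs it once per remount. A quick check of whether the container storage filesystem has the feature, assuming a recent xfsprogs that reports it:

    # bigtime=1 means post-2038 timestamps are supported; bigtime=0 matches the notices above.
    xfs_info /var/lib/containers/storage | grep -o 'bigtime=[01]'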
Nov 25 20:07:54 np0005535963 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 20:07:54 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.
Nov 25 20:07:54 np0005535963 podman[154235]: 2025-11-26 01:07:54.292160512 +0000 UTC m=+0.179543334 container init bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + sudo -E kolla_set_configs
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: sudo: unable to send audit message: Operation not permitted
Nov 25 20:07:54 np0005535963 podman[154235]: 2025-11-26 01:07:54.331701221 +0000 UTC m=+0.219084023 container start bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 20:07:54 np0005535963 podman[154235]: ceilometer_agent_compute
Nov 25 20:07:54 np0005535963 systemd[1]: Started ceilometer_agent_compute container.
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Validating config file
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Copying service configuration files
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: INFO:__main__:Writing out command to execute
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: ++ cat /run_command
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + ARGS=
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + sudo kolla_copy_cacerts
Nov 25 20:07:54 np0005535963 podman[154259]: 2025-11-26 01:07:54.427715431 +0000 UTC m=+0.076289873 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: sudo: unable to send audit message: Operation not permitted
Nov 25 20:07:54 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-3c4dbfa736d2634b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:07:54 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-3c4dbfa736d2634b.service: Failed with result 'exit-code'.
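The failed unit here is the transient per-run healthcheck service (the name is the container ID plus a run suffix), not the container itself: this first check fired before ceilometer-polling had exec'd, so podman reported health_status=starting with health_failing_streak=1 (see the 01:07:54.427 event below) and the one-shot run exited 1. The container keeps running and the check is retried on its timer; a sketch for watching it converge, with names taken from the log:

    # Current health state of the container started above.
    podman inspect ceilometer_agent_compute --format '{{.State.Health.Status}}'

    # The transient healthcheck timer systemd schedules for this container ID.
    systemctl list-timers | grep bf437a65d4f0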
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + [[ ! -n '' ]]
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + . kolla_extend_start
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + umask 0022
Nov 25 20:07:54 np0005535963 ceilometer_agent_compute[154251]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
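This is the tail of the kolla_start sequence: the command string is read from /run_command baked into the image, ARGS stays empty, kolla_extend_start is sourced for service-specific setup, and exec replaces the shell so ceilometer-polling becomes the container's main process (journald keeps tagging its output with the original pid 154251). To confirm what an image would exec, the same file can be read from the running container:

    # The entrypoint command recorded in the image (same file the trace cats above).
    podman exec ceilometer_agent_compute cat /run_command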
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.205 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.206 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.207 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.208 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.209 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.210 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.211 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.212 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.213 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.214 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.215 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.218 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.220 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.220 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.220 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.220 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
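The row of asterisks closes the first full config dump (in-container pid 2, the cotyledon service manager; a second per-service dump from the polling worker, pid 12, follows). Worth noting in it: the [polling] group overrides the DEFAULT exporter settings, so the agent serves Prometheus metrics on [::]:9101 with TLS (tls.crt/tls.key from the /etc/ceilometer/tls mount) even though DEFAULT shows enable_prometheus_exporter=False; the WARNING at 01:07:55.208 flags use of the deprecated DEFAULT/tenant_name_discovery name (identity_name_discovery replaces it); and secret-bearing options (coordination.backend_url, publisher.telemetry_secret, the rgw keys, notification.messaging_urls) are masked as ****. A hedged way to find which copied snippet sets a given option, using the paths from the kolla copy log above:

    # Locate the exporter settings across the main config and conf.d snippets.
    podman exec ceilometer_agent_compute \
        grep -r 'prometheus' /etc/ceilometer/ceilometer.conf /etc/ceilometer/ceilometer.conf.d/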
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.243 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.244 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.244 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.245 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.246 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.247 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.248 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.249 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.250 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.251 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.253 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.254 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.255 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.256 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.257 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.258 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.260 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.261 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.262 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.264 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.266 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.268 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.269 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 25 20:07:55 np0005535963 python3.9[154437]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:07:55 np0005535963 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.471 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.480 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.480 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.480 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.500 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.590 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.591 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.592 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.593 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.594 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.595 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.596 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.597 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.598 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.599 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.600 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.601 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.602 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.603 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.608 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
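[Editor's note] The dict logged in the load_config line above is the parsed contents of the polling definition file. A minimal /etc/ceilometer/polling.yaml that would produce exactly this load result, assuming the standard "sources" layout, is sketched below (shown via a shell heredoc; the file itself is plain YAML):

    cat <<'EOF'   # sketch of /etc/ceilometer/polling.yaml implied by the load_config line above
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    EOF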
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.608 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 25 20:07:55 np0005535963 ceilometer_agent_compute[154251]: 2025-11-26 01:07:55.626 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
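[Editor's note] The service_credentials.* options dumped above correspond to a ceilometer.conf fragment like the following sketch. All values are taken verbatim from the dump; the password is masked as **** in the log, so the placeholder stands in for the deployment secret:

    cat <<'EOF'   # sketch of the [service_credentials] section reconstructed from the option dump
    [service_credentials]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = ceilometer
    password = <masked-in-log>   # printed as **** above; supplied by the deployment
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    interface = internalURL
    EOF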
Nov 25 20:07:55 np0005535963 virtqemud[138515]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 20:07:55 np0005535963 virtqemud[138515]: hostname: compute-0
Nov 25 20:07:55 np0005535963 virtqemud[138515]: End of file while reading data: Input/output error
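[Editor's note] This virtqemud "End of file" message is the libvirt daemon noticing that its client dropped the connection when the ceilometer agent (which reaches libvirt through the read-only /run/libvirt mount in the container) was killed; it does not indicate a daemon fault. A quick liveness check from the host, assuming the standard modular-libvirt QEMU socket, is:

    virsh -c qemu:///system hostname   # succeeds if virtqemud is answering on the system socket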
Nov 25 20:07:55 np0005535963 systemd[1]: libpod-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 25 20:07:55 np0005535963 podman[154449]: 2025-11-26 01:07:55.776573222 +0000 UTC m=+0.347987671 container died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:07:55 np0005535963 systemd[1]: libpod-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Consumed 1.484s CPU time.
Nov 25 20:07:55 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-3c4dbfa736d2634b.timer: Deactivated successfully.
Nov 25 20:07:55 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.
Nov 25 20:07:55 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-userdata-shm.mount: Deactivated successfully.
Nov 25 20:07:55 np0005535963 systemd[1]: var-lib-containers-storage-overlay-f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b-merged.mount: Deactivated successfully.
Nov 25 20:07:57 np0005535963 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 20:07:58 np0005535963 podman[154449]: 2025-11-26 01:07:58.341247831 +0000 UTC m=+2.912662320 container cleanup bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 20:07:58 np0005535963 podman[154449]: ceilometer_agent_compute
Nov 25 20:07:58 np0005535963 podman[154481]: ceilometer_agent_compute
Nov 25 20:07:58 np0005535963 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 25 20:07:58 np0005535963 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 25 20:07:58 np0005535963 systemd[1]: Starting ceilometer_agent_compute container...
Nov 25 20:07:58 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:07:58 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:58 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:58 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:58 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 20:07:58 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.
Nov 25 20:07:58 np0005535963 podman[154494]: 2025-11-26 01:07:58.608707269 +0000 UTC m=+0.147343538 container init bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + sudo -E kolla_set_configs
Nov 25 20:07:58 np0005535963 podman[154494]: 2025-11-26 01:07:58.642595516 +0000 UTC m=+0.181231755 container start bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2)
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: sudo: unable to send audit message: Operation not permitted
Nov 25 20:07:58 np0005535963 podman[154494]: ceilometer_agent_compute
Nov 25 20:07:58 np0005535963 systemd[1]: Started ceilometer_agent_compute container.
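[Editor's note] The config_data label embedded in the podman events above fully describes the container. An approximate hand-run equivalent, assuming the usual podman CLI spellings for the kolla-style keys and listing only the first two volume mounts for brevity, would be:

    podman run --name ceilometer_agent_compute --net host --user ceilometer \
      --security-opt label=type:ceilometer_polling_t \
      --restart always \
      -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS -e OS_ENDPOINT_TYPE=internal \
      --health-cmd '/openstack/healthcheck compute' \
      -v /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z \
      -v /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z \
      quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested \
      kolla_start
    # remaining volume mounts from config_data omitted; on this host the container is
    # managed by the edpm_ceilometer_agent_compute.service systemd unit, not started by hand.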
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Validating config file
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Copying service configuration files
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: INFO:__main__:Writing out command to execute
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: ++ cat /run_command
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + ARGS=
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + sudo kolla_copy_cacerts
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: sudo: unable to send audit message: Operation not permitted
Nov 25 20:07:58 np0005535963 podman[154515]: 2025-11-26 01:07:58.752503538 +0000 UTC m=+0.084639397 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0)
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + [[ ! -n '' ]]
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + . kolla_extend_start
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + umask 0022
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 25 20:07:58 np0005535963 ceilometer_agent_compute[154508]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
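[Editor's note] The kolla_set_configs INFO lines above narrate the copy steps driven by /var/lib/kolla/config_files/config.json, and the cat of /run_command shows the command it records. Based only on those lines, the file plausibly looks like the sketch below; the command string and source/dest pairs are confirmed by the log, while the owner and perm fields are assumptions:

    cat <<'EOF'   # assumed shape of /var/lib/kolla/config_files/config.json
    {
      "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
      "config_files": [
        {"source": "/var/lib/openstack/config/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf",
         "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/polling.yaml",
         "dest": "/etc/ceilometer/polling.yaml",
         "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/custom.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
         "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
         "owner": "ceilometer", "perm": "0600"}
      ]
    }
    EOF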
Nov 25 20:07:58 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:07:58 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: Failed with result 'exit-code'.
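[Editor's note] The transient bf437a65...-1a6bb104d2d31e81.service unit that just failed is the systemd wrapper podman generates for a container healthcheck run; status=1 here is most likely the probe firing while the agent was still starting (see the health_status=starting event above), not a container fault. To rerun the probe and inspect the recorded state by hand (the inspect field is .State.Health on current podman, .State.Healthcheck on older releases):

    podman healthcheck run ceilometer_agent_compute && echo healthy
    podman inspect --format '{{json .State.Health}}' ceilometer_agent_compute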
Nov 25 20:07:59 np0005535963 python3.9[154689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.511 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.511 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.511 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.511 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.512 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.513 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
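[Editor's note] The deprecation WARNING above means the config still uses the old spelling tenant_name_discovery; the value is honored but should be carried forward under the new name identity_name_discovery. A one-line migration against the rendered config would be the sed below, though it is only illustrative here, since on this host /etc/ceilometer/ceilometer.conf is re-copied from /var/lib/openstack/config on every container start and the fix belongs in the source file:

    sed -i 's/^tenant_name_discovery/identity_name_discovery/' /etc/ceilometer/ceilometer.conf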
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.514 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.515 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.516 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.517 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.518 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.519 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.520 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.521 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.522 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.523 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.524 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.525 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.526 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.527 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.528 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.553 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.554 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.554 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.554 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.555 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.556 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.556 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.556 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.556 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.557 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.557 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.557 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.557 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.557 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.558 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.559 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.560 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.561 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.562 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.563 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.564 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.565 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.566 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.567 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.568 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.569 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.570 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.571 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.572 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.573 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.574 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.575 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.576 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.577 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.577 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.580 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.582 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.583 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.584 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.593 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.594 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.594 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.733 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.733 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.733 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.733 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.734 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.735 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.736 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.737 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.738 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.739 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.740 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.741 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.742 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.743 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.744 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.745 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.746 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.747 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.750 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.773 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, processing can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.774 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.776 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff20339e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
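[Annotation] The block of "Registering pollster" lines above is ceilometer loading its compute pollsters as stevedore plugins: each stevedore.extension.Extension wraps one entry point, and each gets registered against the shared ThreadPoolExecutor with empty cache/history/discovery dicts. A minimal sketch of that loading pattern follows; the entry-point namespace 'ceilometer.poll.compute' is taken from upstream ceilometer packaging and the callback name is illustrative, not the agent's actual code.

# Illustrative sketch, assuming stevedore is installed and the
# 'ceilometer.poll.compute' entry-point namespace exists on this host.
from stevedore import extension

def on_load_failure(manager, entrypoint, exc):
    # Log and continue on a bad plugin instead of aborting the whole load.
    print(f"failed to load {entrypoint}: {exc}")

mgr = extension.ExtensionManager(
    namespace='ceilometer.poll.compute',   # entry-point group for compute pollsters
    invoke_on_load=True,                   # instantiate each pollster class on load
    on_load_failure_callback=on_load_failure,
)
for ext in mgr:
    # Each ext is a stevedore Extension object like the ones logged above.
    print(ext.name, ext.obj)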
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:07:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:07:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
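[Annotation] The cycle above follows one pattern per pollster: run the local_instances discovery, find no VMs on this (freshly deployed) compute node, emit "Skip pollster ..., no resources found this cycle", then log "Finished processing". A loose sketch of that control flow, under the assumption that pollsters expose ceilometer's get_samples(manager, cache, resources) plugin interface; this is not the actual AgentManager code.

# Hedged sketch of the per-pollster cycle implied by the log lines above.
def run_pollster(name, pollster, discover):
    resources = discover('local_instances')   # e.g. AgentManager.discover(...)
    if not resources:
        # Matches the "Skip pollster" lines: nothing to sample this cycle.
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    # Only reached once instances exist on the node.
    return list(pollster.get_samples(manager=None, cache={}, resources=resources))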
Nov 25 20:08:00 np0005535963 python3.9[154825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119278.9171717-578-55668169664746/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:08:01 np0005535963 python3.9[154977]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 25 20:08:02 np0005535963 python3.9[155129]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:08:03 np0005535963 python3[155281]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:08:05 np0005535963 podman[155294]: 2025-11-26 01:08:05.163991774 +0000 UTC m=+1.638272475 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 25 20:08:05 np0005535963 podman[155393]: 2025-11-26 01:08:05.35254187 +0000 UTC m=+0.067817207 container create 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Nov 25 20:08:05 np0005535963 podman[155393]: 2025-11-26 01:08:05.321678265 +0000 UTC m=+0.036953602 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 25 20:08:05 np0005535963 python3[155281]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
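[Annotation] The PODMAN-CONTAINER-DEBUG line above shows edpm_container_manage flattening the config_data dict into a `podman create` argv. A minimal sketch of that translation, covering the keys visible in the log (environment, net, privileged, ports, user, volumes, image, command) and deliberately omitting labels and healthcheck flags for brevity; this mirrors the logged command but is an illustration, not the module's actual implementation.

# Hedged sketch: flatten a config_data dict into podman create arguments.
def podman_create_args(name, cfg):
    argv = ['podman', 'create', '--name', name]
    for key, val in cfg.get('environment', {}).items():
        argv += ['--env', f'{key}={val}']
    if cfg.get('net') == 'host':
        argv += ['--network', 'host']
    if cfg.get('privileged'):
        argv += ['--privileged=True']
    for port in cfg.get('ports', []):
        argv += ['--publish', port]
    if 'user' in cfg:
        argv += ['--user', cfg['user']]
    for vol in cfg.get('volumes', []):
        argv += ['--volume', vol]
    argv.append(cfg['image'])          # image comes last, before its command
    argv += cfg.get('command', [])     # e.g. the --collector.* flags above
    return argv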
Nov 25 20:08:06 np0005535963 python3.9[155585]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:08:07 np0005535963 podman[155711]: 2025-11-26 01:08:07.237987418 +0000 UTC m=+0.136940042 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 20:08:07 np0005535963 python3.9[155755]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:08 np0005535963 python3.9[155915]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119287.4926274-631-227131134493704/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:09 np0005535963 python3.9[155991]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:08:09 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:09 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:09 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:10 np0005535963 python3.9[156102]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:08:10 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:10 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:10 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:10 np0005535963 systemd[1]: Starting node_exporter container...
Nov 25 20:08:10 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:10 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c78ff80dc3747cacd8b76fb559e5870daaa9617ba16eab8ed8efe83252b560/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:10 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c78ff80dc3747cacd8b76fb559e5870daaa9617ba16eab8ed8efe83252b560/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:10 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.
Nov 25 20:08:10 np0005535963 podman[156142]: 2025-11-26 01:08:10.797018251 +0000 UTC m=+0.145319336 container init 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.819Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.819Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.819Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.819Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.819Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
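[Annotation] The "Parsed flag" lines above show the effective include/exclude filters. node_exporter evaluates them as anchored Go regexps; the Python fullmatch checks below are an equivalent way to sanity-check which units and mounts the systemd collector will report. The sample unit names are assumptions for illustration.

# Verify the filter behaviour implied by the flags logged above.
import re

include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
exclude = re.compile(r'.+\.(automount|device|mount|scope|slice)')

assert include.fullmatch('edpm_node_exporter.service')   # collected
assert include.fullmatch('openvswitch.service')          # collected
assert not include.fullmatch('sshd.service')             # filtered out
assert exclude.fullmatch('tmp.mount')                    # excluded unit type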
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.820Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=systemd
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.821Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 25 20:08:10 np0005535963 node_exporter[156157]: ts=2025-11-26T01:08:10.822Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
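[Annotation] With TLS enabled on [::]:9100 (per the two lines above), a scrape must use https and trust the CA behind the cert mounted from /var/lib/openstack/certs/telemetry/default. A minimal probe sketch follows; the hostname, the ca.crt filename, and the metric name filter are assumptions for illustration, and the cert's SAN must actually cover the host you use.

# Hedged sketch: scrape the TLS metrics endpoint started above.
import requests

resp = requests.get(
    'https://np0005535963:9100/metrics',
    verify='/var/lib/openstack/certs/telemetry/default/ca.crt',  # assumed CA path
    timeout=5,
)
resp.raise_for_status()
for line in resp.text.splitlines():
    if line.startswith('node_systemd_unit_state'):  # from --collector.systemd
        print(line)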
Nov 25 20:08:10 np0005535963 podman[156142]: 2025-11-26 01:08:10.836716625 +0000 UTC m=+0.185017710 container start 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:08:10 np0005535963 podman[156142]: node_exporter
Nov 25 20:08:10 np0005535963 systemd[1]: Started node_exporter container.
Nov 25 20:08:10 np0005535963 podman[156166]: 2025-11-26 01:08:10.935294953 +0000 UTC m=+0.081050088 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:08:11 np0005535963 python3.9[156342]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:08:11 np0005535963 systemd[1]: Stopping node_exporter container...
Nov 25 20:08:12 np0005535963 systemd[1]: libpod-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope: Deactivated successfully.
Nov 25 20:08:12 np0005535963 podman[156346]: 2025-11-26 01:08:12.044807499 +0000 UTC m=+0.071089696 container died 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 20:08:12 np0005535963 systemd[1]: 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4-211008c1bc62d6c6.timer: Deactivated successfully.
Nov 25 20:08:12 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.
Nov 25 20:08:12 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4-userdata-shm.mount: Deactivated successfully.
Nov 25 20:08:12 np0005535963 systemd[1]: var-lib-containers-storage-overlay-e2c78ff80dc3747cacd8b76fb559e5870daaa9617ba16eab8ed8efe83252b560-merged.mount: Deactivated successfully.
Nov 25 20:08:12 np0005535963 podman[156346]: 2025-11-26 01:08:12.194196037 +0000 UTC m=+0.220478244 container cleanup 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:08:12 np0005535963 podman[156346]: node_exporter
Nov 25 20:08:12 np0005535963 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 20:08:12 np0005535963 podman[156375]: node_exporter
Nov 25 20:08:12 np0005535963 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 25 20:08:12 np0005535963 systemd[1]: Stopped node_exporter container.
Nov 25 20:08:12 np0005535963 systemd[1]: Starting node_exporter container...
Nov 25 20:08:12 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:12 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c78ff80dc3747cacd8b76fb559e5870daaa9617ba16eab8ed8efe83252b560/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:12 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c78ff80dc3747cacd8b76fb559e5870daaa9617ba16eab8ed8efe83252b560/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:12 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.
Nov 25 20:08:12 np0005535963 podman[156388]: 2025-11-26 01:08:12.499235755 +0000 UTC m=+0.163431726 container init 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.518Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.518Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.518Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.519Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.519Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.519Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.519Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.520Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.520Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=systemd
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.521Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.522Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 25 20:08:12 np0005535963 node_exporter[156403]: ts=2025-11-26T01:08:12.522Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 25 20:08:12 np0005535963 podman[156388]: 2025-11-26 01:08:12.531773691 +0000 UTC m=+0.195969642 container start 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:08:12 np0005535963 podman[156388]: node_exporter
Nov 25 20:08:12 np0005535963 systemd[1]: Started node_exporter container.
Nov 25 20:08:12 np0005535963 podman[156413]: 2025-11-26 01:08:12.62914159 +0000 UTC m=+0.081271112 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:08:13 np0005535963 python3.9[156588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:08:14 np0005535963 python3.9[156711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119292.829004-663-61092000541756/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:08:15 np0005535963 python3.9[156863]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 25 20:08:16 np0005535963 python3.9[157015]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:08:17 np0005535963 python3[157167]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:08:18 np0005535963 podman[157180]: 2025-11-26 01:08:18.775766229 +0000 UTC m=+1.337423661 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 25 20:08:18 np0005535963 podman[157275]: 2025-11-26 01:08:18.959181226 +0000 UTC m=+0.066531453 container create cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 20:08:18 np0005535963 podman[157275]: 2025-11-26 01:08:18.925648665 +0000 UTC m=+0.032998912 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 25 20:08:18 np0005535963 python3[157167]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 25 20:08:20 np0005535963 python3.9[157465]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:08:21 np0005535963 python3.9[157619]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:22 np0005535963 python3.9[157770]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119301.1731868-716-30864334479016/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:22 np0005535963 python3.9[157846]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:08:22 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:22 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:22 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:23 np0005535963 python3.9[157956]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:08:23 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:24 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:24 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:24 np0005535963 systemd[1]: Starting podman_exporter container...
Nov 25 20:08:24 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:24 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be431ec50810d1c63784cba25ba71b693d3cdc480f1a85dd52a4938b7bdef532/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:24 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be431ec50810d1c63784cba25ba71b693d3cdc480f1a85dd52a4938b7bdef532/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:24 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.
Nov 25 20:08:24 np0005535963 podman[157995]: 2025-11-26 01:08:24.446487432 +0000 UTC m=+0.181633784 container init cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.474Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.474Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.474Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.475Z caller=handler.go:105 level=info collector=container
Nov 25 20:08:24 np0005535963 podman[157995]: 2025-11-26 01:08:24.485285974 +0000 UTC m=+0.220432276 container start cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:08:24 np0005535963 podman[157995]: podman_exporter
Nov 25 20:08:24 np0005535963 systemd[1]: Starting Podman API Service...
Nov 25 20:08:24 np0005535963 systemd[1]: Started Podman API Service.
Nov 25 20:08:24 np0005535963 systemd[1]: Started podman_exporter container.
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="Setting parallel job count to 25"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="Using sqlite as database backend"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 20:08:24 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:08:24 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 25 20:08:24 np0005535963 podman[158021]: time="2025-11-26T01:08:24Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:08:24 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:08:24 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.574Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.575Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 25 20:08:24 np0005535963 podman_exporter[158010]: ts=2025-11-26T01:08:24.576Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 25 20:08:24 np0005535963 podman[158019]: 2025-11-26 01:08:24.600346033 +0000 UTC m=+0.098497317 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:08:24 np0005535963 systemd[1]: cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7-5a63732a19382222.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:08:24 np0005535963 systemd[1]: cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7-5a63732a19382222.service: Failed with result 'exit-code'.
Nov 25 20:08:25 np0005535963 python3.9[158209]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:08:25 np0005535963 systemd[1]: Stopping podman_exporter container...
Nov 25 20:08:25 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:08:24 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 25 20:08:25 np0005535963 systemd[1]: libpod-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope: Deactivated successfully.
Nov 25 20:08:25 np0005535963 podman[158213]: 2025-11-26 01:08:25.635615763 +0000 UTC m=+0.065022329 container died cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 20:08:25 np0005535963 systemd[1]: cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7-5a63732a19382222.timer: Deactivated successfully.
Nov 25 20:08:25 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.
Nov 25 20:08:25 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7-userdata-shm.mount: Deactivated successfully.
Nov 25 20:08:25 np0005535963 systemd[1]: var-lib-containers-storage-overlay-be431ec50810d1c63784cba25ba71b693d3cdc480f1a85dd52a4938b7bdef532-merged.mount: Deactivated successfully.
Nov 25 20:08:25 np0005535963 podman[158213]: 2025-11-26 01:08:25.86326408 +0000 UTC m=+0.292670646 container cleanup cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:08:25 np0005535963 podman[158213]: podman_exporter
Nov 25 20:08:25 np0005535963 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 20:08:25 np0005535963 podman[158240]: podman_exporter
Nov 25 20:08:25 np0005535963 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 25 20:08:25 np0005535963 systemd[1]: Stopped podman_exporter container.
Nov 25 20:08:25 np0005535963 systemd[1]: Starting podman_exporter container...
Nov 25 20:08:26 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:26 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be431ec50810d1c63784cba25ba71b693d3cdc480f1a85dd52a4938b7bdef532/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:26 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be431ec50810d1c63784cba25ba71b693d3cdc480f1a85dd52a4938b7bdef532/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:26 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.
Nov 25 20:08:26 np0005535963 podman[158253]: 2025-11-26 01:08:26.152014252 +0000 UTC m=+0.163036246 container init cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.172Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.173Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.173Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.173Z caller=handler.go:105 level=info collector=container
Nov 25 20:08:26 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:08:26 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 25 20:08:26 np0005535963 podman[158021]: time="2025-11-26T01:08:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:08:26 np0005535963 podman[158253]: 2025-11-26 01:08:26.196972209 +0000 UTC m=+0.207994163 container start cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:08:26 np0005535963 podman[158253]: podman_exporter
Nov 25 20:08:26 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:08:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.205Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.206Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 25 20:08:26 np0005535963 podman_exporter[158268]: ts=2025-11-26T01:08:26.206Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 25 20:08:26 np0005535963 systemd[1]: Started podman_exporter container.
Nov 25 20:08:26 np0005535963 podman[158279]: 2025-11-26 01:08:26.290801905 +0000 UTC m=+0.078428946 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 20:08:27 np0005535963 python3.9[158454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:08:27 np0005535963 python3.9[158577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119306.4795306-748-78201603386754/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:08:29 np0005535963 podman[158701]: 2025-11-26 01:08:29.055252665 +0000 UTC m=+0.080192090 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 25 20:08:29 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:08:29 np0005535963 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: Failed with result 'exit-code'.
Nov 25 20:08:29 np0005535963 python3.9[158748]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 25 20:08:30 np0005535963 python3.9[158900]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:08:31 np0005535963 python3[159052]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:08:33 np0005535963 podman[159067]: 2025-11-26 01:08:33.817544918 +0000 UTC m=+2.437291882 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 20:08:34 np0005535963 podman[159166]: 2025-11-26 01:08:34.037223835 +0000 UTC m=+0.073441244 container create 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Nov 25 20:08:34 np0005535963 podman[159166]: 2025-11-26 01:08:34.000094824 +0000 UTC m=+0.036312243 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 20:08:34 np0005535963 python3[159052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 20:08:35 np0005535963 python3.9[159356]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:08:36 np0005535963 python3.9[159510]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:36 np0005535963 python3.9[159661]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119316.1414068-801-10993830897498/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:37 np0005535963 podman[159737]: 2025-11-26 01:08:37.473268825 +0000 UTC m=+0.127375538 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:08:37 np0005535963 python3.9[159738]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:08:37 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:37 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:37 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:38 np0005535963 python3.9[159875]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:08:39 np0005535963 systemd[1]: Reloading.
Nov 25 20:08:39 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:08:39 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:08:40 np0005535963 systemd[1]: Starting openstack_network_exporter container...
Nov 25 20:08:40 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:40 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:40 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:40 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:40 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.
Nov 25 20:08:40 np0005535963 podman[159914]: 2025-11-26 01:08:40.262056161 +0000 UTC m=+0.189791690 container init 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *bridge.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *coverage.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *datapath.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *iface.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *memory.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *ovnnorthd.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *ovn.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *ovsdbserver.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *pmd_perf.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *pmd_rxq.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: INFO    01:08:40 main.go:48: registering *vswitch.Collector
Nov 25 20:08:40 np0005535963 openstack_network_exporter[159930]: NOTICE  01:08:40 main.go:76: listening on https://:9105/metrics
Nov 25 20:08:40 np0005535963 podman[159914]: 2025-11-26 01:08:40.294743011 +0000 UTC m=+0.222478550 container start 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Nov 25 20:08:40 np0005535963 podman[159914]: openstack_network_exporter
Nov 25 20:08:40 np0005535963 systemd[1]: Started openstack_network_exporter container.
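With the unit started, the exporter's own NOTICE line reports it listening on https://:9105/metrics, with TLS material bind-mounted from /var/lib/openstack/certs/telemetry/default. A quick sketch for checking the endpoint from the host; the ca.crt filename is an assumption about that directory's layout, not something shown in this log:

    import requests

    resp = requests.get(
        "https://localhost:9105/metrics",
        verify="/var/lib/openstack/certs/telemetry/default/ca.crt",  # assumed path
        timeout=5,
    )
    resp.raise_for_status()
    print(resp.text.splitlines()[0])  # first exposed metric line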
Nov 25 20:08:40 np0005535963 podman[159940]: 2025-11-26 01:08:40.410792046 +0000 UTC m=+0.098351531 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 20:08:41 np0005535963 python3.9[160113]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:08:41 np0005535963 systemd[1]: Stopping openstack_network_exporter container...
Nov 25 20:08:41 np0005535963 systemd[1]: libpod-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope: Deactivated successfully.
Nov 25 20:08:41 np0005535963 podman[160117]: 2025-11-26 01:08:41.435112126 +0000 UTC m=+0.064316811 container died 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git)
Nov 25 20:08:41 np0005535963 systemd[1]: 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6-79cc78b3d7ca6d2a.timer: Deactivated successfully.
Nov 25 20:08:41 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.
Nov 25 20:08:41 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6-userdata-shm.mount: Deactivated successfully.
Nov 25 20:08:41 np0005535963 systemd[1]: var-lib-containers-storage-overlay-82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35-merged.mount: Deactivated successfully.
Nov 25 20:08:42 np0005535963 podman[160117]: 2025-11-26 01:08:42.089320303 +0000 UTC m=+0.718524988 container cleanup 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Nov 25 20:08:42 np0005535963 podman[160117]: openstack_network_exporter
Nov 25 20:08:42 np0005535963 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 20:08:42 np0005535963 podman[160147]: openstack_network_exporter
Nov 25 20:08:42 np0005535963 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 25 20:08:42 np0005535963 systemd[1]: Stopped openstack_network_exporter container.
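During the restart requested at 20:08:41, the unit's main process exits with status=2/INVALIDARGUMENT and systemd records the unit as failed before the follow-up start at 20:08:42 succeeds. The same fields systemd logs here can be read back from the unit; a minimal sketch using `systemctl show`:

    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "edpm_openstack_network_exporter.service",
         "--property=Result,ExecMainStatus"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # expect Result=exit-code and ExecMainStatus=2 after the stop above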
Nov 25 20:08:42 np0005535963 systemd[1]: Starting openstack_network_exporter container...
Nov 25 20:08:42 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:08:42 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:42 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:42 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dda18351939e97cff3771dd2a4880cbe8b2da8f3f8943adc3f7985ddb6ec35/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:08:42 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.
Nov 25 20:08:42 np0005535963 podman[160161]: 2025-11-26 01:08:42.384647274 +0000 UTC m=+0.169040645 container init 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64)
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *bridge.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *coverage.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *datapath.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *iface.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *memory.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *ovnnorthd.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *ovn.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *ovsdbserver.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *pmd_perf.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *pmd_rxq.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: INFO    01:08:42 main.go:48: registering *vswitch.Collector
Nov 25 20:08:42 np0005535963 openstack_network_exporter[160178]: NOTICE  01:08:42 main.go:76: listening on https://:9105/metrics
Nov 25 20:08:42 np0005535963 podman[160161]: 2025-11-26 01:08:42.420494029 +0000 UTC m=+0.204887380 container start 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 25 20:08:42 np0005535963 podman[160161]: openstack_network_exporter
Nov 25 20:08:42 np0005535963 systemd[1]: Started openstack_network_exporter container.
Nov 25 20:08:42 np0005535963 podman[160188]: 2025-11-26 01:08:42.538473255 +0000 UTC m=+0.101348811 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 20:08:43 np0005535963 podman[160332]: 2025-11-26 01:08:43.188415186 +0000 UTC m=+0.078661065 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:08:43 np0005535963 python3.9[160384]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:08:44 np0005535963 python3.9[160536]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 25 20:08:45 np0005535963 python3.9[160702]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:45 np0005535963 systemd[1]: Started libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope.
Nov 25 20:08:45 np0005535963 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 20:08:45 np0005535963 podman[160703]: 2025-11-26 01:08:45.854065809 +0000 UTC m=+0.109917312 container exec e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:08:45 np0005535963 podman[160703]: 2025-11-26 01:08:45.861017122 +0000 UTC m=+0.116868565 container exec_died e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:08:45 np0005535963 systemd[1]: libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope: Deactivated successfully.
Nov 25 20:08:46 np0005535963 python3.9[160888]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:46 np0005535963 systemd[1]: Started libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope.
Nov 25 20:08:46 np0005535963 podman[160889]: 2025-11-26 01:08:46.846740538 +0000 UTC m=+0.094473090 container exec e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 20:08:46 np0005535963 podman[160889]: 2025-11-26 01:08:46.876954938 +0000 UTC m=+0.124687500 container exec_died e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 20:08:46 np0005535963 systemd[1]: libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope: Deactivated successfully.
Nov 25 20:08:47 np0005535963 python3.9[161072]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
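The three tasks above (exec `id -u` in the container, exec `id -g`, then ansible-file setting owner=0 group=0 mode=0700 with recurse=True) form one recurring pattern: align the host-side healthcheck directory's ownership with the user the container runs as, so the read-only `/openstack` bind mount stays readable. A condensed sketch of the same steps outside Ansible, assuming podman on PATH and root privileges:

    import os
    import pathlib
    import subprocess

    def container_ids(name):
        # Mirrors the two podman_container_exec calls: `id -u`, then `id -g`.
        def run_id(flag):
            return int(subprocess.run(
                ["podman", "exec", name, "id", flag],
                capture_output=True, text=True, check=True,
            ).stdout.strip())
        return run_id("-u"), run_id("-g")

    uid, gid = container_ids("ovn_controller")  # 0, 0 here: the container runs as root
    root = pathlib.Path("/var/lib/openstack/healthchecks/ovn_controller")
    for p in [root, *root.rglob("*")]:  # recurse=True equivalent
        os.chown(p, uid, gid)
        p.chmod(0o700)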
Nov 25 20:08:48 np0005535963 python3.9[161224]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 25 20:08:49 np0005535963 python3.9[161389]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:49 np0005535963 systemd[1]: Started libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope.
Nov 25 20:08:49 np0005535963 podman[161390]: 2025-11-26 01:08:49.794722759 +0000 UTC m=+0.104949842 container exec bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 25 20:08:49 np0005535963 podman[161390]: 2025-11-26 01:08:49.830234972 +0000 UTC m=+0.140462055 container exec_died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 25 20:08:49 np0005535963 systemd[1]: libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 25 20:08:50 np0005535963 python3.9[161572]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:50 np0005535963 systemd[1]: Started libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope.
Nov 25 20:08:50 np0005535963 podman[161573]: 2025-11-26 01:08:50.872811998 +0000 UTC m=+0.103445997 container exec bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 25 20:08:50 np0005535963 podman[161573]: 2025-11-26 01:08:50.904322445 +0000 UTC m=+0.134956444 container exec_died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:08:50 np0005535963 systemd[1]: libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 25 20:08:51 np0005535963 python3.9[161757]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:08:52 np0005535963 python3.9[161909]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 25 20:08:53 np0005535963 python3.9[162075]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:53 np0005535963 systemd[1]: Started libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope.
Nov 25 20:08:53 np0005535963 podman[162076]: 2025-11-26 01:08:53.900564183 +0000 UTC m=+0.103903563 container exec 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:08:53 np0005535963 podman[162076]: 2025-11-26 01:08:53.934434436 +0000 UTC m=+0.137773766 container exec_died 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:08:53 np0005535963 systemd[1]: libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope: Deactivated successfully.
Nov 25 20:08:54 np0005535963 python3.9[162260]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:54 np0005535963 systemd[1]: Started libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope.
Nov 25 20:08:54 np0005535963 podman[162261]: 2025-11-26 01:08:54.995884269 +0000 UTC m=+0.102400419 container exec 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:08:55 np0005535963 podman[162261]: 2025-11-26 01:08:55.029283845 +0000 UTC m=+0.135799935 container exec_died 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 20:08:55 np0005535963 systemd[1]: libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope: Deactivated successfully.
Nov 25 20:08:55 np0005535963 python3.9[162444]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
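The id -u / id -g execs above, followed by the ansible.builtin.file task, are the pattern edpm_ansible repeats for each exporter: resolve the UID/GID the container actually runs as, then set ownership and mode 0700 on the healthcheck directory that is bind-mounted into the container at /openstack. A minimal shell sketch of the same steps (node_exporter runs as root here, so both IDs resolve to 0):

    # Sketch only: replicate the discovery + ownership steps by hand.
    uid=$(podman exec node_exporter id -u)
    gid=$(podman exec node_exporter id -g)
    chown -R "$uid:$gid" /var/lib/openstack/healthchecks/node_exporter
    chmod -R 0700 /var/lib/openstack/healthchecks/node_exporter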
Nov 25 20:08:56 np0005535963 podman[162544]: 2025-11-26 01:08:56.545102788 +0000 UTC m=+0.087382572 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:08:56 np0005535963 python3.9[162620]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 25 20:08:57 np0005535963 python3.9[162785]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:57 np0005535963 systemd[1]: Started libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope.
Nov 25 20:08:57 np0005535963 podman[162786]: 2025-11-26 01:08:57.942281453 +0000 UTC m=+0.102333587 container exec cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:08:57 np0005535963 podman[162786]: 2025-11-26 01:08:57.97956689 +0000 UTC m=+0.139619054 container exec_died cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:08:58 np0005535963 systemd[1]: libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope: Deactivated successfully.
Nov 25 20:08:58 np0005535963 python3.9[162970]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:08:58 np0005535963 systemd[1]: Started libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope.
Nov 25 20:08:59 np0005535963 podman[162971]: 2025-11-26 01:08:59.008047662 +0000 UTC m=+0.098343551 container exec cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:08:59 np0005535963 podman[162971]: 2025-11-26 01:08:59.038602305 +0000 UTC m=+0.128898184 container exec_died cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:08:59 np0005535963 systemd[1]: libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope: Deactivated successfully.
Nov 25 20:08:59 np0005535963 podman[163063]: 2025-11-26 01:08:59.550370624 +0000 UTC m=+0.099976350 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:08:59 np0005535963 python3.9[163174]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:00 np0005535963 python3.9[163326]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 25 20:09:01 np0005535963 python3.9[163491]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:09:01 np0005535963 systemd[1]: Started libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope.
Nov 25 20:09:02 np0005535963 podman[163492]: 2025-11-26 01:09:02.010097913 +0000 UTC m=+0.109723576 container exec 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git)
Nov 25 20:09:02 np0005535963 podman[163492]: 2025-11-26 01:09:02.021227908 +0000 UTC m=+0.120853521 container exec_died 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Nov 25 20:09:02 np0005535963 systemd[1]: libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope: Deactivated successfully.
Nov 25 20:09:02 np0005535963 python3.9[163675]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:09:02 np0005535963 systemd[1]: Started libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope.
Nov 25 20:09:02 np0005535963 podman[163676]: 2025-11-26 01:09:02.993742882 +0000 UTC m=+0.092289121 container exec 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Nov 25 20:09:03 np0005535963 podman[163676]: 2025-11-26 01:09:03.024209581 +0000 UTC m=+0.122755810 container exec_died 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git)
Nov 25 20:09:03 np0005535963 systemd[1]: libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope: Deactivated successfully.
Nov 25 20:09:03 np0005535963 python3.9[163861]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:04 np0005535963 python3.9[164013]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:05 np0005535963 python3.9[164165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:06 np0005535963 python3.9[164288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119345.0737028-1016-240869635095132/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:07 np0005535963 python3.9[164440]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:08 np0005535963 podman[164564]: 2025-11-26 01:09:08.054802171 +0000 UTC m=+0.159676944 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 20:09:08 np0005535963 python3.9[164609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:08 np0005535963 python3.9[164696]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:09 np0005535963 python3.9[164848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:10 np0005535963 python3.9[164926]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.klwlqnli recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:10 np0005535963 python3.9[165078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:11 np0005535963 python3.9[165156]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:12 np0005535963 python3.9[165308]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:09:13 np0005535963 podman[165434]: 2025-11-26 01:09:13.346861221 +0000 UTC m=+0.083467048 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:09:13 np0005535963 podman[165433]: 2025-11-26 01:09:13.360215111 +0000 UTC m=+0.101266204 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
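The recurring "container health_status ... health_status=healthy" events throughout this log are podman's periodic healthchecks executing each container's configured 'test' command (here /openstack/healthcheck ...). The same check can be triggered on demand; a sketch:

    # Exit status 0 means the configured healthcheck passed.
    podman healthcheck run node_exporter && echo healthy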
Nov 25 20:09:13 np0005535963 python3[165504]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 20:09:14 np0005535963 python3.9[165659]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:15 np0005535963 python3.9[165737]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:15 np0005535963 python3.9[165889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:16 np0005535963 python3.9[165967]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:17 np0005535963 python3.9[166119]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:17 np0005535963 python3.9[166197]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:18 np0005535963 python3.9[166349]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:19 np0005535963 python3.9[166427]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:20 np0005535963 python3.9[166579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:20 np0005535963 python3.9[166704]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119359.4303553-1141-204997361522610/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:21 np0005535963 python3.9[166856]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:22 np0005535963 python3.9[167008]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:09:23 np0005535963 python3.9[167163]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
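The block= parameter above is logged with escaped newlines (#012, octal for LF). Rendered into /etc/sysconfig/nftables.conf between the configured BEGIN/END markers, and checked with the task's validate command (nft -c -f %s) before the write, the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK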
Nov 25 20:09:24 np0005535963 python3.9[167315]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:09:25 np0005535963 python3.9[167468]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:26 np0005535963 python3.9[167622]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
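The tasks from 20:09:18 through 20:09:27 stage the EDPM ruleset as separate files, then validate and apply them in a fixed order, using the edpm-rules.nft.changed marker file (touched at 20:09:21, removed at 20:09:27) to make the live reload conditional. The equivalent shell sequence, using the commands and paths from this log:

    # 1. Dry-run the concatenated ruleset (-c validates without applying).
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the base chains exist.
    nft -f /etc/nftables/edpm-chains.nft
    # 3. Flush and reload rules only when the marker says they changed.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi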
Nov 25 20:09:27 np0005535963 podman[167750]: 2025-11-26 01:09:27.032537176 +0000 UTC m=+0.093992390 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:09:27 np0005535963 python3.9[167800]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:27 np0005535963 systemd[1]: session-21.scope: Deactivated successfully.
Nov 25 20:09:27 np0005535963 systemd[1]: session-21.scope: Consumed 2min 26.996s CPU time.
Nov 25 20:09:27 np0005535963 systemd-logind[800]: Session 21 logged out. Waiting for processes to exit.
Nov 25 20:09:27 np0005535963 systemd-logind[800]: Removed session 21.
Nov 25 20:09:29 np0005535963 podman[158021]: time="2025-11-26T01:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:09:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Nov 25 20:09:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2134 "" "Go-http-client/1.1"
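These two access-log lines are the Podman REST API service (listening on /run/podman/podman.sock, the socket podman_exporter mounts and points CONTAINER_HOST at) answering the exporter's container-list and stats queries. The same endpoint can be queried by hand; a sketch using curl's unix-socket support (the hostname after http:// is an ignored placeholder):

    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'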
Nov 25 20:09:30 np0005535963 podman[167827]: 2025-11-26 01:09:30.528112667 +0000 UTC m=+0.086962263 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 25 20:09:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:09:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:09:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:09:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:09:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
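These errors mean the exporter found no ovsdb-server or ovn-northd control sockets under the paths mounted into its container (/var/run/openvswitch and /var/lib/openvswitch/ovn on the host, per its volumes above). On a compute node ovn-northd typically does not run locally, so that probe failing is unsurprising; the ovsdb-server and datapath complaints can be checked host-side. A sketch:

    # appctl connects to sockets named like ovs-vswitchd.<pid>.ctl /
    # ovsdb-server.<pid>.ctl; if none are listed, these errors follow.
    ls -l /var/run/openvswitch/ /var/lib/openvswitch/ovn/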
Nov 25 20:09:33 np0005535963 systemd-logind[800]: New session 22 of user zuul.
Nov 25 20:09:33 np0005535963 systemd[1]: Started Session 22 of User zuul.
Nov 25 20:09:34 np0005535963 python3.9[168008]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:09:34 np0005535963 systemd[1]: Reloading.
Nov 25 20:09:34 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:09:34 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:09:35 np0005535963 python3.9[168194]: ansible-ansible.builtin.service_facts Invoked
Nov 25 20:09:36 np0005535963 network[168211]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 20:09:36 np0005535963 network[168212]: 'network-scripts' will be removed from distribution in near future.
Nov 25 20:09:36 np0005535963 network[168213]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 20:09:38 np0005535963 podman[168256]: 2025-11-26 01:09:38.590207108 +0000 UTC m=+0.140640453 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 20:09:42 np0005535963 python3.9[168509]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:09:43 np0005535963 python3.9[168662]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:43 np0005535963 podman[168664]: 2025-11-26 01:09:43.528751973 +0000 UTC m=+0.073878651 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:09:43 np0005535963 podman[168663]: 2025-11-26 01:09:43.558637607 +0000 UTC m=+0.104131836 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal)
Nov 25 20:09:44 np0005535963 python3.9[168859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:45 np0005535963 python3.9[169011]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
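The _raw_params value above encodes a short script with escaped newlines (#012); decoded, the task ran:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

That is: stop and disable certmonger, and mask it unless a local unit file already overrides it.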
Nov 25 20:09:46 np0005535963 python3.9[169163]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:09:47 np0005535963 python3.9[169315]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:09:47 np0005535963 systemd[1]: Reloading.
Nov 25 20:09:47 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:09:47 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:09:48 np0005535963 python3.9[169502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
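Taken together, the tasks since 20:09:42 retire the legacy tripleo_ceilometer_agent_ipmi unit: stop and disable it, delete both possible unit-file locations, reload systemd, and clear any failed state. The equivalent manual sequence, as a sketch:

    systemctl disable --now tripleo_ceilometer_agent_ipmi.service
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service \
          /etc/systemd/system/tripleo_ceilometer_agent_ipmi.service
    systemctl daemon-reload
    systemctl reset-failed tripleo_ceilometer_agent_ipmi.service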
Nov 25 20:09:49 np0005535963 python3.9[169655]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:09:50 np0005535963 python3.9[169806]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:51 np0005535963 python3.9[169958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:52 np0005535963 python3.9[170079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119390.9029694-125-197230322567929/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
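This stat-then-copy pair is Ansible's standard file deploy: ansible.legacy.stat examines the destination, then ansible.legacy.copy installs the staged .source.conf and logs its sha1. The logged checksum permits a later spot-check of the installed file; a minimal sketch using the value from the line above:

    sha1sum /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf
    # expect: e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584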
Nov 25 20:09:53 np0005535963 python3.9[170231]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 25 20:09:55 np0005535963 python3.9[170383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:55 np0005535963 python3.9[170504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119394.5014474-171-108429798630332/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:56 np0005535963 python3.9[170654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:57 np0005535963 python3.9[170775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119395.8393972-171-224657154354029/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:57 np0005535963 podman[170875]: 2025-11-26 01:09:57.54174725 +0000 UTC m=+0.094766522 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:09:57 np0005535963 python3.9[170949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:09:58 np0005535963 python3.9[171070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119397.1983428-171-64836353870400/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:09:59 np0005535963 python3.9[171220]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:09:59 np0005535963 podman[158021]: time="2025-11-26T01:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:09:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Nov 25 20:09:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2145 "" "Go-http-client/1.1"
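These two GET lines are the libpod REST service (pid 158021) answering scrapes, most likely from the podman_exporter container configured above with CONTAINER_HOST=unix:///run/podman/podman.sock. The same endpoint can be queried by hand; a sketch assuming that socket path ('d' is a placeholder hostname, required by curl but ignored over a unix socket):

    curl -s --unix-socket /run/podman/podman.sock \
      'http://d/v4.9.3/libpod/containers/json?all=true'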
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.773 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.774 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.775 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0edde20>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:09:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:09:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
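Everything from 01:09:59.773 onward is one ceilometer polling cycle: the manager registers each pollster from the [pollsters] source against a one-thread executor, runs local_instances discovery per pollster, and, because discovery returns an empty list (no instances on this compute yet), logs a skip for every meter before marking it finished. The doubled space in "no  resources" is most likely an empty resource-qualifier slot in the upstream format string rather than corruption. A hedged way to count the skips from the host, using the container name that appears in the health records nearby:

    podman logs ceilometer_agent_compute 2>&1 | grep -c 'Skip pollster'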
Nov 25 20:10:00 np0005535963 python3.9[171373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:10:00 np0005535963 podman[171499]: 2025-11-26 01:10:00.796968134 +0000 UTC m=+0.100005586 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
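The health_status=healthy records come from podman's healthcheck timer running the configured test (here /openstack/healthcheck compute, bind-mounted from the host) inside the container. The same check can be triggered on demand; a sketch with the container name from this record:

    podman healthcheck run ceilometer_agent_compute && echo healthy   # exits non-zero if the check fails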
Nov 25 20:10:00 np0005535963 python3.9[171538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:10:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:10:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:10:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:10:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
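These exporter errors are expected on a compute node: ovn-northd runs on the control plane, so no ovn-northd control socket exists here; the ovsdb-server probe finds no control socket under the path the exporter searches; and the dpif-netdev/* appctl calls apply only to the userspace (netdev/DPDK) datapath, which this host does not run, hence "please specify an existing datapath". Hedged checks against the socket directories the exporter mounts (per its config_data above):

    ls /var/run/openvswitch /var/lib/openvswitch/ovn   # look for *.ctl control sockets
    ovs-appctl dpif/show                               # lists existing datapaths, kernel or netdev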
Nov 25 20:10:01 np0005535963 python3.9[171666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119400.323307-230-18905080189533/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
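mode=420 in this and the following copy/file tasks is not a typo: Ansible logs the octal permission 0644 it received in its decimal rendering, 420, which happens when a playbook passes the mode unquoted. Quick arithmetic check:

    printf '%o\n' 420    # prints 644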
Nov 25 20:10:02 np0005535963 python3.9[171816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:03 np0005535963 python3.9[171892]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:03 np0005535963 python3.9[172042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:04 np0005535963 python3.9[172163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119403.2434192-230-79671503500657/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:05 np0005535963 python3.9[172313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:05 np0005535963 python3.9[172434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119404.6272144-230-1102666990154/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:06 np0005535963 python3.9[172584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:07 np0005535963 python3.9[172705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119406.0708315-230-109978383191395/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:08 np0005535963 python3.9[172855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:08 np0005535963 python3.9[172976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119407.518063-230-43554460902352/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
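The stat/copy pairs above are ansible's idempotence gate: ansible.legacy.stat reads the destination's SHA-1 (checksum_algorithm=sha1), and the copy module only transfers the file when the checksums differ; mode=420 is simply the decimal rendering of octal 0644. A minimal Python sketch of that gate, illustrative only and not the ansible implementation:

    import hashlib
    import shutil
    from pathlib import Path
    from typing import Optional

    def sha1_of(path: Path) -> Optional[str]:
        # Mirrors ansible.legacy.stat with get_checksum=True: None if absent.
        if not path.is_file():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def idempotent_copy(src: Path, dest: Path, mode: int = 0o644) -> bool:
        # Copy only when contents differ; True means the task is "changed".
        if sha1_of(src) == sha1_of(dest):
            return False
        shutil.copy2(src, dest)
        dest.chmod(mode)
        return True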
Nov 25 20:10:08 np0005535963 podman[172977]: 2025-11-26 01:10:08.988344215 +0000 UTC m=+0.160936227 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 20:10:09 np0005535963 python3.9[173150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:10 np0005535963 python3.9[173226]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:11 np0005535963 python3.9[173378]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:11 np0005535963 python3.9[173530]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:12 np0005535963 python3.9[173682]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:13 np0005535963 python3.9[173834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:14 np0005535963 podman[173929]: 2025-11-26 01:10:14.177929112 +0000 UTC m=+0.063190148 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 25 20:10:14 np0005535963 podman[173930]: 2025-11-26 01:10:14.199953946 +0000 UTC m=+0.072547082 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:10:14 np0005535963 python3.9[173996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119413.1095202-349-96993242114425/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:14 np0005535963 python3.9[174077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:15 np0005535963 python3.9[174200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119413.1095202-349-96993242114425/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:16 np0005535963 python3.9[174352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:10:17 np0005535963 python3.9[174475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764119415.8075123-349-77865952556102/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 20:10:18 np0005535963 python3.9[174627]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 25 20:10:19 np0005535963 python3.9[174779]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:10:20 np0005535963 python3[174931]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:10:25 np0005535963 podman[174944]: 2025-11-26 01:10:25.858208832 +0000 UTC m=+5.310823965 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 25 20:10:26 np0005535963 podman[175044]: 2025-11-26 01:10:26.104270517 +0000 UTC m=+0.098896301 container create 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 20:10:26 np0005535963 podman[175044]: 2025-11-26 01:10:26.045157639 +0000 UTC m=+0.039783433 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 25 20:10:26 np0005535963 python3[174931]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
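The PODMAN-CONTAINER-DEBUG line shows the whole translation performed by edpm_container_manage: each key of the config_data dict becomes one podman create flag. A simplified Python sketch of that mapping, covering only a subset of the flags visible above (an illustration, not the module's source):

    import shlex

    def podman_create_argv(name, conf):
        # Build an argv like the PODMAN-CONTAINER-DEBUG line above.
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", "/run/{}.pid".format(name)]
        for key, val in conf.get("environment", {}).items():
            argv += ["--env", "{}={}".format(key, val)]
        hc = conf.get("healthcheck")
        if hc:
            argv += ["--healthcheck-command", hc["test"]]
        if conf.get("net"):
            argv += ["--network", conf["net"]]
        if conf.get("privileged"):
            argv.append("--privileged=True")
        if conf.get("security_opt"):
            argv += ["--security-opt", conf["security_opt"]]
        if conf.get("user"):
            argv += ["--user", conf["user"]]
        for vol in conf.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(conf["image"])
        if conf.get("command"):
            argv += shlex.split(conf["command"])
        return argv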
Nov 25 20:10:27 np0005535963 python3.9[175234]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:10:27 np0005535963 podman[175360]: 2025-11-26 01:10:27.905855613 +0000 UTC m=+0.089327721 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:10:28 np0005535963 python3.9[175403]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:29 np0005535963 python3.9[175561]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119428.1794462-427-99842459322307/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:29 np0005535963 podman[158021]: time="2025-11-26T01:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:10:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 15564 "" "Go-http-client/1.1"
Nov 25 20:10:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2149 "" "Go-http-client/1.1"
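The two GET lines above are the podman_exporter scraping podman's libpod REST API through the unix socket named by CONTAINER_HOST (unix:///run/podman/podman.sock, per the podman_exporter config_data earlier). The same query can be reproduced with nothing but the standard library; a sketch, assuming the default socket path and using HTTP/1.0 so the body arrives unchunked:

    import json
    import socket

    def libpod_containers(sock_path="/run/podman/podman.sock"):
        # Same request as "GET /v4.9.3/libpod/containers/json?all=true" above.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                  b"Host: d\r\n\r\n")
        raw = b""
        while True:
            chunk = s.recv(65536)
            if not chunk:
                break
            raw += chunk
        s.close()
        _headers, _, body = raw.partition(b"\r\n\r\n")
        return json.loads(body)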
Nov 25 20:10:30 np0005535963 python3.9[175638]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:10:30 np0005535963 systemd[1]: Reloading.
Nov 25 20:10:30 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:10:30 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:10:30 np0005535963 podman[175721]: 2025-11-26 01:10:30.971465754 +0000 UTC m=+0.091487697 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 20:10:31 np0005535963 python3.9[175770]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:10:31 np0005535963 systemd[1]: Reloading.
Nov 25 20:10:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:10:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:10:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:10:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:10:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
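These recurring exporter ERRORs appear to be benign noise on this node: dpif-netdev/pmd-perf-show and pmd-rxq-show only return data when Open vSwitch runs a userspace (netdev/DPDK) datapath, the ovn-northd control socket lives on the control plane rather than on a compute host, and the exporter evidently cannot find a local ovsdb-server control socket either. The failing call can be reproduced by hand; a sketch using subprocess, assuming ovs-vswitchd is running with the kernel datapath as it is here:

    import subprocess

    def pmd_rxq_show():
        # On a kernel-datapath host this returns the same
        # "please specify an existing datapath" error as the exporter.
        return subprocess.run(
            ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True)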
Nov 25 20:10:31 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:10:31 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:10:31 np0005535963 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 25 20:10:31 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:10:31 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:31 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:31 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:31 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:31 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.
Nov 25 20:10:31 np0005535963 podman[175809]: 2025-11-26 01:10:31.82157728 +0000 UTC m=+0.163527644 container init 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + sudo -E kolla_set_configs
Nov 25 20:10:31 np0005535963 podman[175809]: 2025-11-26 01:10:31.85518033 +0000 UTC m=+0.197130724 container start 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 20:10:31 np0005535963 podman[175809]: ceilometer_agent_ipmi
Nov 25 20:10:31 np0005535963 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Validating config file
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Copying service configuration files
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: INFO:__main__:Writing out command to execute
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: ++ cat /run_command
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + ARGS=
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + sudo kolla_copy_cacerts
Nov 25 20:10:31 np0005535963 podman[175831]: 2025-11-26 01:10:31.957496549 +0000 UTC m=+0.082206324 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 20:10:31 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-3a873c604cfe1019.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:10:31 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-3a873c604cfe1019.service: Failed with result 'exit-code'.
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + [[ ! -n '' ]]
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + . kolla_extend_start
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + umask 0022
Nov 25 20:10:31 np0005535963 ceilometer_agent_ipmi[175824]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
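The INFO lines above are kolla_set_configs executing the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json: delete each destination, copy the source over it, fix permissions, then write the service command to /run_command for kolla_start to exec. A condensed Python sketch of those steps, using the config.json field names (command, config_files, source, dest, perm) and omitting glob handling and ownership:

    import json
    import os
    import shutil

    def kolla_set_configs(cfg="/var/lib/kolla/config_files/config.json"):
        with open(cfg) as f:
            spec = json.load(f)
        for item in spec.get("config_files", []):
            if os.path.exists(item["dest"]):
                os.remove(item["dest"])                # "Deleting ..."
            shutil.copy(item["source"], item["dest"])  # "Copying ... to ..."
            os.chmod(item["dest"], int(item.get("perm", "0600"), 8))
        with open("/run_command", "w") as f:
            f.write(spec["command"])                   # "Writing out command"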
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.780 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.780 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.780 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.781 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.782 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.783 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.784 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.785 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.786 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.787 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.788 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.789 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.790 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.791 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.792 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.793 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.794 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
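[Editor's note] The banner line above closes oslo.config's standard startup dump: when the log_options option is enabled (it appears as log_options = True in the dump below), every registered option is written at DEBUG level through ConfigOpts.log_opt_values(), and options registered with secret=True (transport URLs, passwords, access/secret keys) are masked as ****. A minimal sketch reproducing this behavior, assuming only that oslo.config is installed:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    OPTS = [
        cfg.StrOpt('host', default='localhost'),
        cfg.StrOpt('password', secret=True),  # secret=True -> logged as ****
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS)

    logging.basicConfig(level=logging.DEBUG)
    CONF(args=[], project='demo')            # parse (empty) CLI and config sources
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits the same banner-delimited dump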
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.815 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.817 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.819 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
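[Editor's note] The three INFO lines above are the dynamic-pollster discovery pass: the polling manager scans polling.pollsters_definitions_dirs for YAML definition files and, finding none, falls back to the stock entry-point pollsters. For illustration, a definition file could be generated along these lines; the VPN meter, file name, and field values are hypothetical, with key names following ceilometer's documented dynamic-pollster format (assumes PyYAML and a writable /etc/ceilometer/pollsters.d):

    import yaml

    # Hypothetical dynamic pollster definition; keys follow the upstream
    # ceilometer dynamic-pollster documentation (name, sample_type, unit,
    # value_attribute, endpoint_type, url_path).
    definition = [{
        'name': 'dynamic.network.services.vpn.connection',
        'sample_type': 'gauge',
        'unit': 'ipsec_site_connection',
        'value_attribute': 'status',
        'endpoint_type': 'network',
        'url_path': 'v2.0/vpn/ipsec-site-connections',
    }]

    with open('/etc/ceilometer/pollsters.d/vpn.yaml', 'w') as f:
        yaml.safe_dump(definition, f)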
Nov 25 20:10:32 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:32.941 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpu60shmvx/privsep.sock']
Nov 25 20:10:32 np0005535963 python3.9[176007]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 25 20:10:33 np0005535963 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.617 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.618 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpu60shmvx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.490 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.498 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.502 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.502 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
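[Editor's note] The messages above trace the oslo.privsep handshake: the unprivileged agent launches a root helper through sudo and ceilometer-rootwrap, the helper drops from full root to the fixed capability set reported in the log (CAP_CHOWN through CAP_SYS_ADMIN), and the two processes then exchange calls over the /tmp/.../privsep.sock UNIX socket; the kernel's "deprecated v2 capabilities" warning is a side effect of that capability bounding. A context such as ceilometer.privsep.sys_admin_pctxt is declared roughly as follows (approximate sketch, not the authoritative source; see ceilometer/privsep.py upstream):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Approximate sketch; the capability list matches the eff/prm set
    # reported by the privsep daemon in the log above.
    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[
            capabilities.CAP_CHOWN,
            capabilities.CAP_DAC_OVERRIDE,
            capabilities.CAP_DAC_READ_SEARCH,
            capabilities.CAP_FOWNER,
            capabilities.CAP_NET_ADMIN,
            capabilities.CAP_SYS_ADMIN,
        ],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        # The body executes inside the root privsep daemon, not the agent.
        with open(path, 'rb') as f:
            return f.read()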
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.752 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.752 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.754 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.754 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.754 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.755 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.755 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.755 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.755 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.756 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.756 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.756 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.756 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
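[Editor's note] Every hardware.ipmi.* extension is skipped, leaving the agent with no pollsters at all: this host is a Nova/KVM guest (see the DMI and "Hypervisor detected: KVM" lines at boot) and exposes no BMC, so the IPMITool-backed sensor pollsters report "IPMITool not supported on host", while the node-manager pollsters fail earlier, during instantiation, with the object.__new__() TypeError shown above. Whether a local BMC would answer can be checked with a probe along these lines (a hypothetical helper, not ceilometer's own code; 'raw 0x06 0x01' is the IPMI Get Device ID command):

    import subprocess

    def bmc_reachable(timeout=5):
        """Return True if a local BMC answers an IPMI Get Device ID."""
        try:
            subprocess.run(['ipmitool', 'raw', '0x06', '0x01'],
                           check=True, capture_output=True, timeout=timeout)
            return True
        except (OSError, subprocess.SubprocessError):
            return False

    # On a KVM guest like this node, bmc_reachable() returns False, which is
    # consistent with the "IPMITool not supported on host" skips above.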
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.761 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.762 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.762 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.762 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.762 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.762 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.763 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.764 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.764 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.764 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.765 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.765 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.765 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.765 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.766 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.766 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.766 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.766 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.766 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.767 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.769 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.769 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.769 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.769 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.769 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.770 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.770 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.770 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.770 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.770 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.771 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.772 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.772 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.772 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.772 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.772 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.773 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.773 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.773 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.773 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.773 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.774 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.775 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.775 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.775 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.775 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.775 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.776 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.779 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.780 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.781 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.781 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.781 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.781 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.781 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.782 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.782 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.783 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.784 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.784 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.784 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.784 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.784 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.785 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.785 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.785 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.785 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.785 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.786 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.786 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.786 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.786 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.786 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.787 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.787 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.787 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.787 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.787 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.788 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.788 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.788 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.788 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.788 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.789 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.790 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.791 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.792 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.792 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.792 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.792 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.792 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.793 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.794 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.794 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.794 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.794 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.794 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.795 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.796 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.796 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.796 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.796 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.796 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.797 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.797 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.797 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.797 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.797 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.798 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.799 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.799 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.799 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.799 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.799 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.800 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 25 20:10:33 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:33.804 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 25 20:10:33 np0005535963 python3.9[176170]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 20:10:34 np0005535963 python3[176324]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 20:10:40 np0005535963 podman[176417]: 2025-11-26 01:10:40.001727784 +0000 UTC m=+0.733134321 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 20:10:40 np0005535963 podman[176338]: 2025-11-26 01:10:40.708611439 +0000 UTC m=+5.626610717 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 25 20:10:40 np0005535963 podman[176559]: 2025-11-26 01:10:40.935305526 +0000 UTC m=+0.081853795 container create 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 25 20:10:40 np0005535963 podman[176559]: 2025-11-26 01:10:40.89347992 +0000 UTC m=+0.040028219 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 25 20:10:40 np0005535963 python3[176324]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Nov 25 20:10:41 np0005535963 python3.9[176749]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:10:42 np0005535963 python3.9[176903]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:43 np0005535963 python3.9[177054]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119443.2234452-489-63236899585194/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:44 np0005535963 podman[177103]: 2025-11-26 01:10:44.342352839 +0000 UTC m=+0.077966403 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:10:44 np0005535963 podman[177102]: 2025-11-26 01:10:44.357241829 +0000 UTC m=+0.091822446 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 20:10:44 np0005535963 python3.9[177175]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 20:10:44 np0005535963 systemd[1]: Reloading.
Nov 25 20:10:44 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:10:44 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:10:45 np0005535963 python3.9[177287]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 20:10:45 np0005535963 systemd[1]: Reloading.
Nov 25 20:10:45 np0005535963 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 20:10:45 np0005535963 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 20:10:46 np0005535963 systemd[1]: Starting kepler container...
Nov 25 20:10:46 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:10:46 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.
Nov 25 20:10:46 np0005535963 podman[177327]: 2025-11-26 01:10:46.792867889 +0000 UTC m=+0.631729375 container init 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30)
Nov 25 20:10:46 np0005535963 kepler[177342]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 20:10:46 np0005535963 podman[177327]: 2025-11-26 01:10:46.828908033 +0000 UTC m=+0.667769469 container start 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, name=ubi9, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 25 20:10:46 np0005535963 podman[177327]: kepler
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.839509       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.839716       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.839743       1 config.go:295] kernel version: 5.14
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.840564       1 power.go:78] Unable to obtain power, use estimate method
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.840614       1 redfish.go:169] failed to get redfish credential file path
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.841279       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.841301       1 power.go:79] using none to obtain power
Nov 25 20:10:46 np0005535963 kepler[177342]: E1126 01:10:46.841325       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 25 20:10:46 np0005535963 kepler[177342]: E1126 01:10:46.841371       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 25 20:10:46 np0005535963 systemd[1]: Started kepler container.
Nov 25 20:10:46 np0005535963 kepler[177342]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 20:10:46 np0005535963 kepler[177342]: I1126 01:10:46.844547       1 exporter.go:84] Number of CPUs: 8
Nov 25 20:10:46 np0005535963 podman[177352]: 2025-11-26 01:10:46.95400664 +0000 UTC m=+0.104776075 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=)
Nov 25 20:10:46 np0005535963 systemd[1]: 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-ab2f6b422a380ab.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:10:46 np0005535963 systemd[1]: 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-ab2f6b422a380ab.service: Failed with result 'exit-code'.
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.477484       1 watcher.go:83] Using in cluster k8s config
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.477551       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 25 20:10:47 np0005535963 kepler[177342]: E1126 01:10:47.477634       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.484983       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.485039       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.492384       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.492440       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.506571       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.506651       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.506673       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520328       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520395       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520407       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520418       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520430       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520492       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520629       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520720       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520762       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.520908       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.521135       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 25 20:10:47 np0005535963 kepler[177342]: I1126 01:10:47.522129       1 exporter.go:208] Started Kepler in 683.013159ms
Nov 25 20:10:48 np0005535963 python3.9[177539]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:10:48 np0005535963 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 25 20:10:48 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:48.236 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 25 20:10:48 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:48.339 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 25 20:10:48 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:48.339 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 25 20:10:48 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:48.340 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 25 20:10:48 np0005535963 ceilometer_agent_ipmi[175824]: 2025-11-26 01:10:48.354 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 25 20:10:48 np0005535963 systemd[1]: libpod-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Deactivated successfully.
Nov 25 20:10:48 np0005535963 systemd[1]: libpod-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Consumed 2.247s CPU time.
Nov 25 20:10:48 np0005535963 podman[177543]: 2025-11-26 01:10:48.51293514 +0000 UTC m=+0.343157309 container died 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 20:10:48 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-3a873c604cfe1019.timer: Deactivated successfully.
Nov 25 20:10:48 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.
Nov 25 20:10:48 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-userdata-shm.mount: Deactivated successfully.
Nov 25 20:10:48 np0005535963 systemd[1]: var-lib-containers-storage-overlay-e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f-merged.mount: Deactivated successfully.
Nov 25 20:10:49 np0005535963 podman[177543]: 2025-11-26 01:10:49.223159272 +0000 UTC m=+1.053381431 container cleanup 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 20:10:49 np0005535963 podman[177543]: ceilometer_agent_ipmi
Nov 25 20:10:49 np0005535963 podman[177573]: ceilometer_agent_ipmi
Nov 25 20:10:49 np0005535963 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 25 20:10:49 np0005535963 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 25 20:10:49 np0005535963 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 25 20:10:49 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:10:49 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:49 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:49 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:49 np0005535963 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2527f32a0a93c5b9f6304ec3907f9ef039bfc6059ac3dc081152946e0957f/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 20:10:49 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.
Nov 25 20:10:49 np0005535963 podman[177585]: 2025-11-26 01:10:49.582725539 +0000 UTC m=+0.214509889 container init 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + sudo -E kolla_set_configs
Nov 25 20:10:49 np0005535963 podman[177585]: 2025-11-26 01:10:49.636133448 +0000 UTC m=+0.267917738 container start 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 20:10:49 np0005535963 podman[177585]: ceilometer_agent_ipmi
Nov 25 20:10:49 np0005535963 systemd[1]: Started ceilometer_agent_ipmi container.
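
The stop/start pair above is edpm_ansible bouncing the systemd unit that wraps the podman container. A minimal sketch of reproducing that restart by hand and confirming the result, assuming only the unit and container names that appear in this log and that systemctl and podman are on PATH:

    import subprocess

    def restart_and_verify(unit="edpm_ceilometer_agent_ipmi.service",
                           container="ceilometer_agent_ipmi"):
        # Restart the systemd unit that wraps the podman container,
        # as the ansible systemd task in this log does.
        subprocess.run(["systemctl", "restart", unit], check=True)
        # Confirm systemd considers the unit active again.
        subprocess.run(["systemctl", "is-active", unit], check=True)
        # Run the container healthcheck once, as the transient
        # "podman healthcheck run <id>" units do on a timer.
        subprocess.run(["podman", "healthcheck", "run", container], check=True)

    if __name__ == "__main__":
        restart_and_verify()
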
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Validating config file
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Copying service configuration files
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: INFO:__main__:Writing out command to execute
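
The COPY_ALWAYS sequence just logged is driven by the kolla config.json mounted at /var/lib/kolla/config_files/config.json. A hedged reconstruction of that file and of the copy loop, using kolla's usual source/dest/owner/perm schema; the owner and perm values are assumptions, only the paths and the command are taken from this log:

    import os
    import shutil

    # Approximate reconstruction of /var/lib/kolla/config_files/config.json,
    # inferred from the Copying/Setting-permission lines above; owner and
    # perm are illustrative, not read from this log.
    CONFIG = {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"},
        ],
    }

    def copy_always(cfg):
        # COPY_ALWAYS strategy: delete the destination, copy the source
        # fresh, then set the mode, mirroring the Deleting/Copying/Setting
        # permission triplets logged above.
        for f in cfg["config_files"]:
            if os.path.exists(f["dest"]):
                os.remove(f["dest"])
            shutil.copy(f["source"], f["dest"])
            os.chmod(f["dest"], int(f["perm"], 8))

Deleting before copying is what keeps the strategy idempotent across restarts: the container filesystem never accumulates stale configuration.
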
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: ++ cat /run_command
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + ARGS=
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + sudo kolla_copy_cacerts
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + [[ ! -n '' ]]
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + . kolla_extend_start
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + umask 0022
Nov 25 20:10:49 np0005535963 ceilometer_agent_ipmi[177599]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
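
The traced tail of kolla_start reads the rendered command back from /run_command, resets the umask, and execs it, so ceilometer-polling replaces the shell as the container's main process. The same three steps as a minimal Python sketch:

    import os
    import shlex

    # Mirror of the shell trace above: CMD=$(cat /run_command); umask 0022; exec $CMD
    with open("/run_command") as f:
        cmd = f.read().strip()
    os.umask(0o022)
    # exec replaces this process, so the polled agent inherits PID and fds
    # and systemd keeps tracking it as the container's main process.
    argv = shlex.split(cmd)
    os.execvp(argv[0], argv)
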
Nov 25 20:10:49 np0005535963 podman[177607]: 2025-11-26 01:10:49.77214361 +0000 UTC m=+0.117465107 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 25 20:10:49 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-2f61c51c27f1ce0b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:10:49 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-2f61c51c27f1ce0b.service: Failed with result 'exit-code'.
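
The <container-id>-<hash>.service exiting with status=1 here is podman's transient healthcheck unit, matching the health_status=starting, health_failing_streak=1 event logged just above: the check fired before ceilometer-polling finished starting. A small sketch that instead waits for the container to report healthy, assuming podman's docker-compatible .State.Health inspect fields:

    import json
    import subprocess
    import time

    def wait_healthy(container="ceilometer_agent_ipmi", timeout=120):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["podman", "inspect", "--format", "{{json .State.Health}}", container],
                capture_output=True, text=True, check=True).stdout
            health = json.loads(out)
            # Status moves starting -> healthy/unhealthy as checks accumulate.
            if health.get("Status") == "healthy":
                return True
            time.sleep(5)
        return False
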
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.621 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.622 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.623 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.623 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.623 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.623 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.624 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.624 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.624 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.624 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.624 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.625 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.625 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.625 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.626 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.626 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.626 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.626 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.627 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.627 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.627 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.627 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.627 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.628 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.628 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.628 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.628 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.629 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.629 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.629 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.629 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.629 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.630 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.630 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.630 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.630 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.630 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.631 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.631 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.631 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.631 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.632 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.632 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.632 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.632 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.632 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.633 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.633 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.633 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.633 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.634 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.634 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.634 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.634 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.635 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.635 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.635 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.635 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.635 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.636 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.636 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.637 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.637 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.637 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.637 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.638 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.638 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.638 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.638 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.639 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.640 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.640 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.640 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.640 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.641 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.641 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.641 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.641 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.642 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.642 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.643 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.643 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.643 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.644 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.644 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.644 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.645 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.645 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.645 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.645 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.646 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.646 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.646 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.646 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.647 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.649 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.649 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.649 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.650 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.651 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.651 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.651 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.651 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.652 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.652 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.652 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.653 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.653 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.653 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.653 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.654 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.654 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.654 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.654 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.655 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.655 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.655 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.655 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.656 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.657 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.657 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.657 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.657 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.658 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.658 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.658 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.659 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.659 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.659 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.660 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.660 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.660 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.661 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.661 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.661 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.662 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.662 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.662 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.662 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.663 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.663 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.663 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.664 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.664 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.664 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.664 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
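
The block between the two rows of asterisks is oslo.config's log_opt_values() walking every registered option; options declared secret (coordination.backend_url, publisher.telemetry_secret, vmware.host_password, the rgw keys) are masked as ****. A minimal sketch of the same layering using the config_file and config_dir sources shown above; the two options registered here stand in for ceilometer's full set, and telemetry_secret is flattened out of its [publisher] group for brevity:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    # secret=True is what produces the "****" masking seen in the dump above.
    conf.register_opts([
        cfg.IntOpt("batch_size", default=50),
        cfg.StrOpt("telemetry_secret", secret=True),
    ])
    # Same sources as this service: one base file plus a conf.d directory,
    # with the directory's 01-/02- snippets layered over the base file.
    conf(args=[],
         default_config_files=["/etc/ceilometer/ceilometer.conf"],
         default_config_dirs=["/etc/ceilometer/ceilometer.conf.d"])
    conf.log_opt_values(LOG, logging.DEBUG)
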
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.685 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.686 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.688 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
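
Dynamic pollsters are YAML definitions dropped into /etc/ceilometer/pollsters.d; the directory is empty on this node, so the manager falls through to the stevedore-loaded built-in pollsters below. An illustrative definition following the documented dynamic-pollster keys; the meter name, endpoint type, and URL here are invented for the example:

    # Illustrative dynamic pollster definition; the field names follow the
    # ceilometer dynamic-pollster docs, the concrete values are made up.
    EXAMPLE_POLLSTER = """\
    ---
    - name: "dynamic.example.requests"
      sample_type: "gauge"
      unit: "request"
      value_attribute: "count"
      endpoint_type: "example-service"
      url_path: "/v1/stats"
    """

    with open("/etc/ceilometer/pollsters.d/example.yaml", "w") as f:
        f.write(EXAMPLE_POLLSTER)
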
Nov 25 20:10:50 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:50.710 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpq9g5lr_d/privsep.sock']
Nov 25 20:10:50 np0005535963 python3.9[177783]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 20:10:50 np0005535963 systemd[1]: Stopping kepler container...
Nov 25 20:10:51 np0005535963 kepler[177342]: I1126 01:10:51.102801       1 exporter.go:218] Received shutdown signal
Nov 25 20:10:51 np0005535963 kepler[177342]: I1126 01:10:51.104018       1 exporter.go:226] Exiting...
Nov 25 20:10:51 np0005535963 systemd[1]: libpod-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope: Deactivated successfully.
Nov 25 20:10:51 np0005535963 systemd[1]: libpod-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope: Consumed 1.046s CPU time.
Nov 25 20:10:51 np0005535963 podman[177794]: 2025-11-26 01:10:51.318450179 +0000 UTC m=+0.310021071 container died 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9)
Nov 25 20:10:51 np0005535963 systemd[1]: 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-ab2f6b422a380ab.timer: Deactivated successfully.
Nov 25 20:10:51 np0005535963 systemd[1]: Stopped /usr/bin/podman healthcheck run 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.
Nov 25 20:10:51 np0005535963 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-userdata-shm.mount: Deactivated successfully.
Nov 25 20:10:51 np0005535963 systemd[1]: var-lib-containers-storage-overlay-84011a908dd5543666a0ae9b2e0ed62f8acf014a8498759003ef66ef07d5675e-merged.mount: Deactivated successfully.
Nov 25 20:10:51 np0005535963 podman[177794]: 2025-11-26 01:10:51.367850043 +0000 UTC m=+0.359420895 container cleanup 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git)
Nov 25 20:10:51 np0005535963 podman[177794]: kepler
Nov 25 20:10:51 np0005535963 podman[177823]: kepler
Nov 25 20:10:51 np0005535963 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 25 20:10:51 np0005535963 systemd[1]: Stopped kepler container.
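
The kepler container just stopped is defined entirely by the config_data blob in the "container died" record above. Translated by hand into a roughly equivalent podman invocation; the flag mapping is an approximation of what edpm_ansible renders, and the ports entry is dropped because it is redundant under host networking:

    import subprocess

    # Hand translation of the logged kepler config_data; an approximation,
    # not the exact edpm_ansible rendering (recreate/healthcheck omitted).
    cmd = [
        "podman", "run", "--name", "kepler",
        "--privileged", "--restart", "always", "--net", "host",
        "--volume", "/lib/modules:/lib/modules:ro",
        "--volume", "/run/libvirt:/run/libvirt:shared,ro",
        "--volume", "/sys:/sys",
        "--volume", "/proc:/proc",
        "--volume", "/var/lib/openstack/healthchecks/kepler:/openstack:ro,z",
        "--env", "ENABLE_GPU=true",
        "--env", "EXPOSE_CONTAINER_METRICS=true",
        "--env", "ENABLE_PROCESS_METRICS=true",
        "--env", "EXPOSE_VM_METRICS=true",
        "--env", "EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false",
        "--env", "LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1",
        "quay.io/sustainable_computing_io/kepler:release-0.7.12",
        "-v=2",  # trailing args become the container command, per config_data
    ]
    subprocess.run(cmd, check=True)
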
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.454 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.455 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpq9g5lr_d/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.309 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.316 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.320 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.320 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 25 20:10:51 np0005535963 systemd[1]: Starting kepler container...
Nov 25 20:10:51 np0005535963 systemd[1]: Started libcrun container.
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.582 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.583 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.584 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.584 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.585 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.585 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.585 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.585 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.586 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.586 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.586 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.586 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.586 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.591 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.592 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.593 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.593 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.593 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.593 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.593 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.594 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.595 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.597 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.597 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.597 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.597 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.597 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.598 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.599 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.600 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.601 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.602 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.603 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.603 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.603 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.603 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.603 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.604 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.605 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.606 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.607 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.608 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.609 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.610 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.611 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.612 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.613 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.614 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.615 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.619 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.619 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 25 20:10:51 np0005535963 ceilometer_agent_ipmi[177599]: 2025-11-26 01:10:51.622 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 25 20:10:51 np0005535963 systemd[1]: Started /usr/bin/podman healthcheck run 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.
Nov 25 20:10:51 np0005535963 podman[177838]: 2025-11-26 01:10:51.636269083 +0000 UTC m=+0.153080810 container init 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 25 20:10:51 np0005535963 kepler[177854]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.663854       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.664223       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.664257       1 config.go:295] kernel version: 5.14
Nov 25 20:10:51 np0005535963 podman[177838]: 2025-11-26 01:10:51.66479534 +0000 UTC m=+0.181607047 container start 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.665492       1 power.go:78] Unable to obtain power, use estimate method
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.665526       1 redfish.go:169] failed to get redfish credential file path
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.666011       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.666026       1 power.go:79] using none to obtain power
Nov 25 20:10:51 np0005535963 kepler[177854]: E1126 01:10:51.666043       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 25 20:10:51 np0005535963 kepler[177854]: E1126 01:10:51.666065       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 25 20:10:51 np0005535963 kepler[177854]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 20:10:51 np0005535963 kepler[177854]: I1126 01:10:51.668188       1 exporter.go:84] Number of CPUs: 8
Nov 25 20:10:51 np0005535963 podman[177838]: kepler
Nov 25 20:10:51 np0005535963 systemd[1]: Started kepler container.
Nov 25 20:10:51 np0005535963 podman[177866]: 2025-11-26 01:10:51.761611946 +0000 UTC m=+0.077277265 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 25 20:10:51 np0005535963 systemd[1]: 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-17b1359af5e86b3e.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:10:51 np0005535963 systemd[1]: 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9-17b1359af5e86b3e.service: Failed with result 'exit-code'.
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.173313       1 watcher.go:83] Using in cluster k8s config
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.173960       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 25 20:10:52 np0005535963 kepler[177854]: E1126 01:10:52.174198       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.181751       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.182067       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.187001       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.187061       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.199177       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.199257       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.199283       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211483       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211541       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211550       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211559       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211568       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.211589       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212253       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212333       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212369       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212397       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212583       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 25 20:10:52 np0005535963 kepler[177854]: I1126 01:10:52.212981       1 exporter.go:208] Started Kepler in 549.472392ms
Nov 25 20:10:52 np0005535963 python3.9[178051]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 20:10:54 np0005535963 python3.9[178203]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 25 20:10:55 np0005535963 python3.9[178368]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:10:55 np0005535963 systemd[1]: Started libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope.
Nov 25 20:10:55 np0005535963 podman[178369]: 2025-11-26 01:10:55.651032514 +0000 UTC m=+0.152669310 container exec e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:10:55 np0005535963 podman[178369]: 2025-11-26 01:10:55.686912833 +0000 UTC m=+0.188549579 container exec_died e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 20:10:55 np0005535963 systemd[1]: libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope: Deactivated successfully.
Nov 25 20:10:56 np0005535963 python3.9[178550]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:10:56 np0005535963 systemd[1]: Started libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope.
Nov 25 20:10:56 np0005535963 podman[178551]: 2025-11-26 01:10:56.959752581 +0000 UTC m=+0.120754854 container exec e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 20:10:56 np0005535963 podman[178551]: 2025-11-26 01:10:56.991994295 +0000 UTC m=+0.152996558 container exec_died e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 20:10:57 np0005535963 systemd[1]: libpod-conmon-e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16.scope: Deactivated successfully.
Nov 25 20:10:57 np0005535963 python3.9[178734]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:10:58 np0005535963 podman[178811]: 2025-11-26 01:10:58.518703 +0000 UTC m=+0.079590745 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 20:10:59 np0005535963 python3.9[178908]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 25 20:10:59 np0005535963 podman[158021]: time="2025-11-26T01:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:10:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Nov 25 20:10:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2990 "" "Go-http-client/1.1"
Nov 25 20:11:00 np0005535963 python3.9[179073]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:00 np0005535963 systemd[1]: Started libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope.
Nov 25 20:11:00 np0005535963 podman[179074]: 2025-11-26 01:11:00.459645616 +0000 UTC m=+0.157010643 container exec bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 20:11:00 np0005535963 podman[179074]: 2025-11-26 01:11:00.494497069 +0000 UTC m=+0.191862116 container exec_died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 25 20:11:00 np0005535963 systemd[1]: libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 25 20:11:01 np0005535963 podman[179229]: 2025-11-26 01:11:01.355607401 +0000 UTC m=+0.093969612 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Nov 25 20:11:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:11:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:11:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:11:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:11:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 20:11:01 np0005535963 python3.9[179276]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:01 np0005535963 systemd[1]: Started libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope.
Nov 25 20:11:01 np0005535963 podman[179278]: 2025-11-26 01:11:01.712786296 +0000 UTC m=+0.128749893 container exec bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Nov 25 20:11:01 np0005535963 podman[179278]: 2025-11-26 01:11:01.745776791 +0000 UTC m=+0.161740378 container exec_died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.build-date=20251118)
Nov 25 20:11:01 np0005535963 systemd[1]: libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 25 20:11:02 np0005535963 python3.9[179460]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:04 np0005535963 python3.9[179612]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 25 20:11:05 np0005535963 python3.9[179776]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:05 np0005535963 systemd[1]: Started libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope.
Nov 25 20:11:05 np0005535963 podman[179777]: 2025-11-26 01:11:05.466625673 +0000 UTC m=+0.134754131 container exec 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:11:05 np0005535963 podman[179777]: 2025-11-26 01:11:05.501457745 +0000 UTC m=+0.169586153 container exec_died 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:11:05 np0005535963 systemd[1]: libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope: Deactivated successfully.
Nov 25 20:11:06 np0005535963 python3.9[179957]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:06 np0005535963 systemd[1]: Started libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope.
Nov 25 20:11:06 np0005535963 podman[179958]: 2025-11-26 01:11:06.83033034 +0000 UTC m=+0.156383637 container exec 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:11:06 np0005535963 podman[179958]: 2025-11-26 01:11:06.864638428 +0000 UTC m=+0.190691715 container exec_died 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 20:11:06 np0005535963 systemd[1]: libpod-conmon-48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4.scope: Deactivated successfully.
Nov 25 20:11:08 np0005535963 python3.9[180140]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:09 np0005535963 python3.9[180292]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 25 20:11:10 np0005535963 python3.9[180457]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:10 np0005535963 systemd[1]: Started libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope.
Nov 25 20:11:10 np0005535963 podman[180458]: 2025-11-26 01:11:10.398188866 +0000 UTC m=+0.145662256 container exec cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 20:11:10 np0005535963 podman[180458]: 2025-11-26 01:11:10.432557146 +0000 UTC m=+0.180030576 container exec_died cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:11:10 np0005535963 systemd[1]: libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope: Deactivated successfully.
Nov 25 20:11:10 np0005535963 podman[180475]: 2025-11-26 01:11:10.613602548 +0000 UTC m=+0.205447382 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 20:11:11 np0005535963 python3.9[180663]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:11 np0005535963 systemd[1]: Started libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope.
Nov 25 20:11:11 np0005535963 podman[180664]: 2025-11-26 01:11:11.754003975 +0000 UTC m=+0.152716160 container exec cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 20:11:11 np0005535963 podman[180664]: 2025-11-26 01:11:11.786543447 +0000 UTC m=+0.185255592 container exec_died cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 20:11:11 np0005535963 systemd[1]: libpod-conmon-cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7.scope: Deactivated successfully.
Nov 25 20:11:13 np0005535963 python3.9[180845]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:14 np0005535963 python3.9[180997]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 25 20:11:14 np0005535963 podman[181034]: 2025-11-26 01:11:14.576339726 +0000 UTC m=+0.116918014 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 20:11:14 np0005535963 podman[181028]: 2025-11-26 01:11:14.58301031 +0000 UTC m=+0.125963280 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 25 20:11:15 np0005535963 python3.9[181204]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:15 np0005535963 systemd[1]: Started libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope.
Nov 25 20:11:15 np0005535963 podman[181205]: 2025-11-26 01:11:15.687893998 +0000 UTC m=+0.174963624 container exec 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7)
Nov 25 20:11:15 np0005535963 podman[181205]: 2025-11-26 01:11:15.723224373 +0000 UTC m=+0.210293929 container exec_died 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Nov 25 20:11:15 np0005535963 systemd[1]: libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope: Deactivated successfully.
Nov 25 20:11:16 np0005535963 python3.9[181387]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:17 np0005535963 systemd[1]: Started libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope.
Nov 25 20:11:17 np0005535963 podman[181388]: 2025-11-26 01:11:17.118256771 +0000 UTC m=+0.153230725 container exec 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 20:11:17 np0005535963 podman[181388]: 2025-11-26 01:11:17.156452291 +0000 UTC m=+0.191426145 container exec_died 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Nov 25 20:11:17 np0005535963 systemd[1]: libpod-conmon-3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6.scope: Deactivated successfully.
Nov 25 20:11:18 np0005535963 python3.9[181569]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
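The preceding steps form one recurring pattern in this play: run `id -u` and `id -g` inside the container via podman_container_exec, then chown the healthcheck mount on the host to the reported uid/gid (0/0 here; 42405 for ceilometer_agent_ipmi below). A rough equivalent outside ansible, as a sketch only:

    import os
    import subprocess

    NAME = "openstack_network_exporter"                 # container from the log
    ROOT = f"/var/lib/openstack/healthchecks/{NAME}"    # the healthcheck mount

    def container_id(flag: str) -> int:
        # Same effect as podman_container_exec with command="id -u" / "id -g".
        out = subprocess.run(["podman", "exec", NAME, "id", flag],
                             check=True, capture_output=True, text=True).stdout
        return int(out.strip())

    uid, gid = container_id("-u"), container_id("-g")
    # recurse=True with mode=0700, matching the ansible.builtin.file call above.
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(entry, uid, gid)
            os.chmod(entry, 0o700)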
Nov 25 20:11:19 np0005535963 python3.9[181721]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 25 20:11:20 np0005535963 podman[181859]: 2025-11-26 01:11:20.592519024 +0000 UTC m=+0.134437552 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 20:11:20 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-2f61c51c27f1ce0b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 20:11:20 np0005535963 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-2f61c51c27f1ce0b.service: Failed with result 'exit-code'.
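The failed unit is the transient per-run healthcheck service podman creates (the container ID plus a random suffix); its 1/FAILURE exit matches the health_status=starting, health_failing_streak=2 event above, i.e. the `/openstack/healthcheck ipmi` command returned non-zero. One way to pull its output for diagnosis, assuming journal access:

    import subprocess

    UNIT = ("576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22"
            "-2f61c51c27f1ce0b.service")

    # The transient unit's journal carries the healthcheck command's output.
    subprocess.run(["journalctl", "-u", UNIT, "-n", "20", "--no-pager"],
                   check=False)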
Nov 25 20:11:20 np0005535963 python3.9[181905]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:20 np0005535963 systemd[1]: Started libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope.
Nov 25 20:11:20 np0005535963 podman[181906]: 2025-11-26 01:11:20.958482949 +0000 UTC m=+0.144166827 container exec 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:11:20 np0005535963 podman[181906]: 2025-11-26 01:11:20.992978862 +0000 UTC m=+0.178662740 container exec_died 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 25 20:11:21 np0005535963 systemd[1]: libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Deactivated successfully.
Nov 25 20:11:22 np0005535963 podman[182058]: 2025-11-26 01:11:21.999260397 +0000 UTC m=+0.103130412 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., version=9.4)
Nov 25 20:11:22 np0005535963 python3.9[182103]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:22 np0005535963 systemd[1]: Started libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope.
Nov 25 20:11:22 np0005535963 podman[182104]: 2025-11-26 01:11:22.415278433 +0000 UTC m=+0.175966710 container exec 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 20:11:22 np0005535963 podman[182104]: 2025-11-26 01:11:22.449739844 +0000 UTC m=+0.210428121 container exec_died 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 20:11:22 np0005535963 systemd[1]: libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Deactivated successfully.
Nov 25 20:11:23 np0005535963 python3.9[182285]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:24 np0005535963 python3.9[182437]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 25 20:11:26 np0005535963 python3.9[182602]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:26 np0005535963 systemd[1]: Started libpod-conmon-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope.
Nov 25 20:11:26 np0005535963 podman[182603]: 2025-11-26 01:11:26.287469896 +0000 UTC m=+0.162897632 container exec 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc.)
Nov 25 20:11:26 np0005535963 podman[182603]: 2025-11-26 01:11:26.324727195 +0000 UTC m=+0.200154891 container exec_died 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 20:11:26 np0005535963 systemd[1]: libpod-conmon-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope: Deactivated successfully.
Nov 25 20:11:27 np0005535963 python3.9[182783]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 20:11:27 np0005535963 systemd[1]: Started libpod-conmon-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope.
Nov 25 20:11:27 np0005535963 podman[182784]: 2025-11-26 01:11:27.785546073 +0000 UTC m=+0.161630865 container exec 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 25 20:11:27 np0005535963 podman[182784]: 2025-11-26 01:11:27.819236201 +0000 UTC m=+0.195320843 container exec_died 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm)
Nov 25 20:11:27 np0005535963 systemd[1]: libpod-conmon-3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9.scope: Deactivated successfully.
Nov 25 20:11:28 np0005535963 podman[182964]: 2025-11-26 01:11:28.965489147 +0000 UTC m=+0.113201959 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
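Per its config_data, podman_exporter publishes metrics on port 9882 behind a --web.config.file; whether that endpoint speaks plain HTTP or TLS depends on the mounted podman_exporter.yaml, so the probe below is only a sketch that assumes plain HTTP:

    import urllib.request

    # Hypothetical scrape of the exporter's standard /metrics path on the
    # host port from config_data ('ports': ['9882:9882']). Swap in https and
    # certificate handling if the web config enables TLS.
    with urllib.request.urlopen("http://localhost:9882/metrics", timeout=5) as resp:
        print(resp.status, resp.read(200))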
Nov 25 20:11:29 np0005535963 python3.9[182965]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:29 np0005535963 podman[158021]: time="2025-11-26T01:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:11:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Nov 25 20:11:29 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2989 "" "Go-http-client/1.1"
Nov 25 20:11:30 np0005535963 python3.9[183140]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:11:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:11:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:11:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:11:31 np0005535963 openstack_network_exporter[160178]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
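These appctl errors come from the exporter looking for daemon control sockets in the directories mounted into it (/var/run/openvswitch and /var/lib/openvswitch/ovn on the host, per its config_data). ovn-northd normally runs on the control plane rather than on an EDPM compute node, so the "no control socket files found" messages are expected here. A rough stand-in for the presence check the exporter performs:

    import glob

    # Daemons managed by ovs-appctl/ovn-appctl expose *.ctl control sockets
    # in their run directories; an empty result reproduces the errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, glob.glob(pattern))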
Nov 25 20:11:31 np0005535963 python3.9[183292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:31 np0005535963 podman[183293]: 2025-11-26 01:11:31.574783564 +0000 UTC m=+0.120105496 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 20:11:32 np0005535963 python3.9[183435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764119490.6787312-778-243539636671818/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:33 np0005535963 python3.9[183587]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:34 np0005535963 python3.9[183739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:35 np0005535963 python3.9[183817]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:36 np0005535963 python3.9[183969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:37 np0005535963 python3.9[184047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3b47bfkw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:38 np0005535963 python3.9[184199]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:39 np0005535963 python3.9[184277]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:40 np0005535963 python3.9[184429]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
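`nft -j list ruleset` emits the ruleset as JSON: a top-level "nftables" array whose elements each wrap a single object (metainfo, table, chain, rule, ...), which makes the output easy to post-process. A small sketch counting objects by type:

    import json
    import subprocess
    from collections import Counter

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    objects = json.loads(out)["nftables"]
    # Each element is a one-key dict such as {"rule": {...}}.
    print(Counter(next(iter(obj)) for obj in objects))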
Nov 25 20:11:41 np0005535963 podman[184554]: 2025-11-26 01:11:41.419561851 +0000 UTC m=+0.190886697 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 20:11:41 np0005535963 python3[184602]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
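edpm_nftables_from_files is a custom module; judging from its name and src argument, it gathers the per-service rule files written under /var/lib/edpm-config/firewall earlier in this run (kepler.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). A rough stand-in for that aggregation step, not the module's actual implementation:

    import glob

    import yaml  # PyYAML, assumed available

    rules = []
    for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
        with open(path) as fh:
            rules.extend(yaml.safe_load(fh) or [])
    print(f"{len(rules)} rule entries collected")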
Nov 25 20:11:42 np0005535963 python3.9[184762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:43 np0005535963 python3.9[184840]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:44 np0005535963 python3.9[184992]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:44 np0005535963 podman[184996]: 2025-11-26 01:11:44.826454902 +0000 UTC m=+0.107666171 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 20:11:44 np0005535963 podman[184995]: 2025-11-26 01:11:44.855176179 +0000 UTC m=+0.140403673 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc.)
Nov 25 20:11:45 np0005535963 python3.9[185115]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:46 np0005535963 python3.9[185267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:47 np0005535963 python3.9[185345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:48 np0005535963 python3.9[185497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:49 np0005535963 python3.9[185575]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:50 np0005535963 python3.9[185728]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:11:51 np0005535963 podman[185825]: 2025-11-26 01:11:51.024240024 +0000 UTC m=+0.147599107 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 20:11:51 np0005535963 python3.9[185872]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764119509.3787367-903-29194503546054/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:52 np0005535963 podman[186024]: 2025-11-26 01:11:52.2000892 +0000 UTC m=+0.132976171 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, name=ubi9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 20:11:52 np0005535963 python3.9[186025]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:11:53 np0005535963 python3.9[186194]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:11:54 np0005535963 python3.9[186349]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
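The #012 sequences in the blockinfile arguments above are journald's escaping of embedded newlines. Decoded, and with the BEGIN/END markers assembled from the marker parameters shown, the managed block this task maintains in /etc/sysconfig/nftables.conf would read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter means the edited file is syntax-checked with nft before the change is committed.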
Nov 25 20:11:56 np0005535963 python3.9[186501]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:11:57 np0005535963 python3.9[186654]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 20:11:58 np0005535963 python3.9[186808]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 20:11:59 np0005535963 podman[186935]: 2025-11-26 01:11:59.457921825 +0000 UTC m=+0.129400600 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 20:11:59 np0005535963 python3.9[186986]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
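Taken together, the nftables tasks above (touch the edpm-rules.nft.changed marker, dry-run the concatenated ruleset with nft -c, persist the includes, load the chains, apply flushes/rules/jump updates, then delete the marker) form one idempotent update pass: rules are only reapplied when the marker file says the rendered rules actually changed. A minimal Python sketch of that flow, using only the file names visible in these log lines (an illustration, not the edpm_ansible implementation):

    import os
    import subprocess

    NFT_DIR = "/etc/nftables"
    MARKER = os.path.join(NFT_DIR, "edpm-rules.nft.changed")

    def concat(names):
        # mirrors the `cat ... | nft -f -` pipelines in the log
        return "".join(open(os.path.join(NFT_DIR, n)).read() for n in names)

    # 1. Dry-run the full chain/rule/jump set (the `nft -c -f -` task).
    full = concat(["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
                   "edpm-update-jumps.nft", "edpm-jumps.nft"])
    subprocess.run(["nft", "-c", "-f", "-"], input=full, text=True, check=True)

    # 2. Make sure the chains exist (the `nft -f edpm-chains.nft` task).
    subprocess.run(["nft", "-f", os.path.join(NFT_DIR, "edpm-chains.nft")],
                   check=True)

    # 3. Apply flushes, rules and jump updates only when the marker exists,
    #    then drop the marker (the stat / command / file tasks above).
    if os.path.exists(MARKER):
        update = concat(["edpm-flushes.nft", "edpm-rules.nft",
                         "edpm-update-jumps.nft"])
        subprocess.run(["nft", "-f", "-"], input=update, text=True, check=True)
        os.unlink(MARKER)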
Nov 25 20:11:59 np0005535963 podman[158021]: time="2025-11-26T01:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 20:11:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Nov 25 20:11:59 np0005535963 podman[158021]: @ - - [26/Nov/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2994 "" "Go-http-client/1.1"
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.774 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.775 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
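The two DEBUG lines above record the polling manager sizing its executor below the pollster count: every pollster from the [pollsters] source is queued onto a single worker thread, so the cycle runs essentially serially. A toy reproduction of that situation (illustrative names, not ceilometer code):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.bytes"]        # more tasks ...
    executor = ThreadPoolExecutor(max_workers=1)  # ... than workers

    def poll(name):
        # a real pollster would run discovery and sampling here
        return "polled " + name

    futures = [executor.submit(poll, p) for p in pollsters]
    for f in futures:
        print(f.result())  # tasks complete one at a time on the lone worker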
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.775 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea8b30>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
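Every pollster in this cycle is skipped for the same reason: the local_instances discovery ran once, returned no instances, and its cached empty result (the discovery cache [{'local_instances': []}] field above) is reused for each subsequent pollster. A condensed sketch of that per-cycle caching pattern, with illustrative names:

    discovery_cache = {}

    def run_discovery(method):
        return []  # no local instances on this node this cycle

    def discover(method):
        # discovery executes once per cycle; later callers hit the cache
        if method not in discovery_cache:
            discovery_cache[method] = run_discovery(method)
        return discovery_cache[method]

    for pollster in ("cpu", "memory.usage", "disk.device.read.bytes"):
        if not discover("local_instances"):
            print("Skip pollster %s, no resources found this cycle" % pollster)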
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:11:59 np0005535963 ceilometer_agent_compute[154508]: 2025-11-26 01:11:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 20:12:00 np0005535963 systemd[1]: session-22.scope: Deactivated successfully.
Nov 25 20:12:00 np0005535963 systemd[1]: session-22.scope: Consumed 2min 10.761s CPU time.
Nov 25 20:12:00 np0005535963 systemd-logind[800]: Session 22 logged out. Waiting for processes to exit.
Nov 25 20:12:00 np0005535963 systemd-logind[800]: Removed session 22.
Nov 25 20:12:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:12:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 20:12:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 20:12:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 20:12:01 np0005535963 openstack_network_exporter[160178]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
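These exporter errors recur on every scrape: the ovn-northd probe fails because ovn-northd normally runs on control-plane nodes, not on this compute node (only ovn_controller appears in this log), and the ovsdb-server and datapath probes fail when the expected appctl control sockets are not where the exporter looks. A quick check for those sockets, assuming the host paths from the openstack_network_exporter volume mounts logged at 20:12:15 and the usual <daemon>.<pid>.ctl naming:

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none (matches the errors above)")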
Nov 25 20:12:02 np0005535963 podman[187012]: 2025-11-26 01:12:02.615787917 +0000 UTC m=+0.161848172 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 25 20:12:06 np0005535963 systemd-logind[800]: New session 23 of user zuul.
Nov 25 20:12:06 np0005535963 systemd[1]: Started Session 23 of User zuul.
Nov 25 20:12:07 np0005535963 python3.9[187185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 20:12:09 np0005535963 python3.9[187341]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 25 20:12:11 np0005535963 python3.9[187494]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 20:12:12 np0005535963 podman[187550]: 2025-11-26 01:12:12.183721825 +0000 UTC m=+0.181208982 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 20:12:12 np0005535963 python3.9[187597]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 20:12:15 np0005535963 podman[187605]: 2025-11-26 01:12:15.589082393 +0000 UTC m=+0.141794602 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Nov 25 20:12:15 np0005535963 podman[187606]: 2025-11-26 01:12:15.588684451 +0000 UTC m=+0.135411800 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 20:12:19 np0005535963 python3.9[187802]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:20 np0005535963 python3.9[187926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119538.639638-54-223067727103783/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:21 np0005535963 podman[188026]: 2025-11-26 01:12:21.573391864 +0000 UTC m=+0.120185777 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 20:12:21 np0005535963 python3.9[188099]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 20:12:22 np0005535963 podman[188188]: 2025-11-26 01:12:22.540622591 +0000 UTC m=+0.095705102 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64)
Nov 25 20:12:22 np0005535963 python3.9[188269]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 20:12:23 np0005535963 python3.9[188392]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764119542.2493134-77-73884844523707/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:12:25 compute-0 python3.9[188544]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:12:25 compute-0 systemd[1]: Stopping System Logging Service...
Nov 26 01:12:25 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 26 01:12:25 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 26 01:12:25 compute-0 systemd[1]: Stopped System Logging Service.
Nov 26 01:12:25 compute-0 systemd[1]: rsyslog.service: Consumed 2.175s CPU time, 5.4M memory peak, read 0B from disk, written 3.9M to disk.
Nov 26 01:12:25 compute-0 systemd[1]: Starting System Logging Service...
Nov 26 01:12:25 compute-0 rsyslogd[188548]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188548" x-info="https://www.rsyslog.com"] start
Nov 26 01:12:25 compute-0 systemd[1]: Started System Logging Service.
Nov 26 01:12:25 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:12:25 compute-0 rsyslogd[188548]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 26 01:12:25 compute-0 rsyslogd[188548]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 26 01:12:25 compute-0 rsyslogd[188548]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 26 01:12:25 compute-0 rsyslogd[188548]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
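The restart sequence above shows rsyslogd coming back up, initiating a TLS connection to the remote syslog server at 172.17.0.80, and warning that no client certificate or key is configured (rsyslog error codes 2330/2331), so the session proceeds without a client certificate. A hedged sketch for checking the deployed state; the config path comes from the copy task above, and rsyslogd -N1 is the daemon's standard syntax-check mode:

    # Validate the rsyslog configuration without restarting the daemon
    rsyslogd -N1
    # Review the telemetry forwarding rules installed by the copy task
    cat /etc/rsyslog.d/10-telemetry.conf
    # Confirm the TLS session to the remote server seen in the log
    ss -tnp | grep 172.17.0.80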
Nov 26 01:12:26 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 26 01:12:26 compute-0 systemd[1]: session-23.scope: Consumed 15.917s CPU time.
Nov 26 01:12:26 compute-0 systemd-logind[800]: Session 23 logged out. Waiting for processes to exit.
Nov 26 01:12:26 compute-0 systemd-logind[800]: Removed session 23.
Nov 26 01:12:29 compute-0 podman[158021]: time="2025-11-26T01:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:12:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Nov 26 01:12:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2980 "" "Go-http-client/1.1"
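These GET lines are the libpod REST API being served over the podman socket; the client is most likely the podman_exporter container, whose config (logged below at 01:12:30) sets CONTAINER_HOST=unix:///run/podman/podman.sock and polls exactly these container list/stats endpoints. An equivalent manual query, assuming curl and jq are available (jq is installed by a dnf task later in this log):

    # Same endpoint as the logged GET; 'd' is a dummy hostname curl requires
    curl -s --unix-socket /run/podman/podman.sock \
      'http://d/v4.9.3/libpod/containers/json?all=true' | jq '.[].Names'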
Nov 26 01:12:30 compute-0 podman[188577]: 2025-11-26 01:12:30.600743073 +0000 UTC m=+0.159444672 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:12:31 compute-0 openstack_network_exporter[160178]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:12:31 compute-0 openstack_network_exporter[160178]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:12:31 compute-0 openstack_network_exporter[160178]: ERROR   01:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:12:31 compute-0 openstack_network_exporter[160178]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:12:31 compute-0 openstack_network_exporter[160178]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:12:33 compute-0 podman[188603]: 2025-11-26 01:12:33.580316613 +0000 UTC m=+0.126076713 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:12:33 compute-0 systemd-logind[800]: New session 24 of user zuul.
Nov 26 01:12:33 compute-0 systemd[1]: Started Session 24 of User zuul.
Nov 26 01:12:41 compute-0 python3[189363]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:12:42 compute-0 podman[189424]: 2025-11-26 01:12:42.639975828 +0000 UTC m=+0.188551813 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 26 01:12:43 compute-0 python3[189492]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 01:12:45 compute-0 python3[189519]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:12:46 compute-0 podman[189546]: 2025-11-26 01:12:46.002354349 +0000 UTC m=+0.135826660 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:12:46 compute-0 python3[189547]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:46 compute-0 podman[189545]: 2025-11-26 01:12:46.026374897 +0000 UTC m=+0.162896911 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 01:12:46 compute-0 kernel: loop: module loaded
Nov 26 01:12:46 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 26 01:12:46 compute-0 python3[189622]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:46 compute-0 lvm[189625]: PV /dev/loop3 not used.
Nov 26 01:12:46 compute-0 lvm[189627]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 01:12:46 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 26 01:12:46 compute-0 lvm[189635]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 26 01:12:46 compute-0 lvm[189637]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 01:12:46 compute-0 lvm[189637]: VG ceph_vg0 finished
Nov 26 01:12:46 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
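The two ansible-command tasks above carry multi-line shell scripts; #012 is the journal's octal escape for a newline. Decoded, the sequence that backs OSD 0 with a sparse 20 GiB loop device and a dedicated volume group is:

    # Sparse 20G backing file: bs=1 count=0 seek=20G allocates no data blocks
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk
    # Turn the loop device into a PV, a VG, and one LV spanning all free extents
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs

The same pattern repeats below for OSD 1 (/dev/loop4, ceph_vg1/ceph_lv1) and OSD 2 (/dev/loop5, ceph_vg2/ceph_lv2).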
Nov 26 01:12:47 compute-0 python3[189715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:12:48 compute-0 python3[189788]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119567.204117-36730-17766386760946/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:12:49 compute-0 python3[189838]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:12:49 compute-0 systemd[1]: Reloading.
Nov 26 01:12:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:12:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:12:49 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 01:12:49 compute-0 bash[189879]: /dev/loop3: [64513]:4194940 (/var/lib/ceph-osd-0.img)
Nov 26 01:12:49 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 01:12:49 compute-0 lvm[189881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 01:12:49 compute-0 lvm[189881]: VG ceph_vg0 finished
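The bash[189879] line above is the output of the freshly installed ceph-osd-losetup-0.service re-attaching the backing file. The unit body itself (templated from ceph-osd-losetup.service.j2) is not logged; a hedged way to inspect the result with standard tooling:

    # Show the unit exactly as deployed to /etc/systemd/system
    systemctl cat ceph-osd-losetup-0.service
    # Confirm which loop device the backing image is attached to
    losetup -j /var/lib/ceph-osd-0.img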
Nov 26 01:12:50 compute-0 python3[189907]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 01:12:52 compute-0 podman[189934]: 2025-11-26 01:12:52.011247074 +0000 UTC m=+0.140608210 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Nov 26 01:12:52 compute-0 python3[189935]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:12:52 compute-0 python3[189980]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:52 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 26 01:12:52 compute-0 podman[190011]: 2025-11-26 01:12:52.934045607 +0000 UTC m=+0.126989857 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm)
Nov 26 01:12:52 compute-0 python3[190012]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:53 compute-0 lvm[190035]: PV /dev/loop4 has no VG metadata.
Nov 26 01:12:53 compute-0 lvm[190035]: PV /dev/loop4 online, VG unknown.
Nov 26 01:12:53 compute-0 lvm[190035]: VG unknown
Nov 26 01:12:53 compute-0 lvm[190044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 01:12:53 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 26 01:12:53 compute-0 lvm[190046]:  PVs online not found for VG ceph_vg1, using all devices.
Nov 26 01:12:53 compute-0 lvm[190046]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 26 01:12:53 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 26 01:12:54 compute-0 python3[190124]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:12:54 compute-0 python3[190197]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119573.5857174-36757-175727382714366/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:12:55 compute-0 python3[190247]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:12:55 compute-0 systemd[1]: Reloading.
Nov 26 01:12:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:12:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:12:55 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 01:12:55 compute-0 bash[190287]: /dev/loop4: [64513]:4328189 (/var/lib/ceph-osd-1.img)
Nov 26 01:12:55 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 01:12:56 compute-0 lvm[190288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 01:12:56 compute-0 lvm[190288]: VG ceph_vg1 finished
Nov 26 01:12:56 compute-0 python3[190314]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 01:12:58 compute-0 python3[190341]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:12:58 compute-0 python3[190367]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:58 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 26 01:12:58 compute-0 python3[190399]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:12:59 compute-0 lvm[190402]: PV /dev/loop5 not used.
Nov 26 01:12:59 compute-0 lvm[190404]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 01:12:59 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 26 01:12:59 compute-0 lvm[190412]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 26 01:12:59 compute-0 lvm[190415]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 01:12:59 compute-0 lvm[190415]: VG ceph_vg2 finished
Nov 26 01:12:59 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 26 01:12:59 compute-0 podman[158021]: time="2025-11-26T01:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:12:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Nov 26 01:12:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2994 "" "Go-http-client/1.1"
Nov 26 01:13:00 compute-0 python3[190493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:13:00 compute-0 python3[190566]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119579.5934515-36784-145522806675417/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:13:01 compute-0 podman[190616]: 2025-11-26 01:13:01.037252099 +0000 UTC m=+0.101707146 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:13:01 compute-0 python3[190617]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:13:01 compute-0 systemd[1]: Reloading.
Nov 26 01:13:01 compute-0 openstack_network_exporter[160178]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:13:01 compute-0 openstack_network_exporter[160178]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:13:01 compute-0 openstack_network_exporter[160178]: ERROR   01:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:13:01 compute-0 openstack_network_exporter[160178]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:13:01 compute-0 openstack_network_exporter[160178]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:13:01 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:01 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:01 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 26 01:13:01 compute-0 bash[190680]: /dev/loop5: [64513]:4329603 (/var/lib/ceph-osd-2.img)
Nov 26 01:13:01 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 26 01:13:01 compute-0 lvm[190682]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 01:13:01 compute-0 lvm[190682]: VG ceph_vg2 finished
Nov 26 01:13:03 compute-0 python3[190706]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:13:04 compute-0 podman[190744]: 2025-11-26 01:13:04.594390862 +0000 UTC m=+0.135948233 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 01:13:06 compute-0 python3[190826]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 01:13:08 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 01:13:08 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 01:13:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 01:13:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 01:13:09 compute-0 systemd[1]: run-r320da14f416a48f58edba387134d56be.service: Deactivated successfully.
Nov 26 01:13:09 compute-0 python3[190953]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:13:10 compute-0 python3[190981]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:13:11 compute-0 python3[191045]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:13:11 compute-0 python3[191071]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:13:12 compute-0 python3[191150]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:13:12 compute-0 podman[191149]: 2025-11-26 01:13:12.911231759 +0000 UTC m=+0.190894047 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:13:13 compute-0 python3[191248]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119592.3884747-36931-22255207072460/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:13:14 compute-0 python3[191350]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:13:15 compute-0 python3[191423]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119594.0923061-36949-80387218788893/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:13:15 compute-0 python3[191473]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:13:16 compute-0 python3[191501]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:13:16 compute-0 podman[191503]: 2025-11-26 01:13:16.190217378 +0000 UTC m=+0.096480552 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:13:16 compute-0 podman[191502]: 2025-11-26 01:13:16.207174432 +0000 UTC m=+0.113614791 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, version=9.6)
Nov 26 01:13:16 compute-0 python3[191569]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:13:17 compute-0 python3[191597]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
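Decoded from the raw params above (with the journal's #012 newline escape dropped, and the stray line-continuation backslashes before --single-host-defaults and --skip-monitoring-stack removed), the bootstrap invocation is:

    /usr/sbin/cephadm bootstrap \
      --skip-firewalld --skip-prepare-host \
      --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
      --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
      --ssh-user ceph-admin --allow-fqdn-hostname \
      --output-keyring /etc/ceph/ceph.client.admin.keyring \
      --output-config /etc/ceph/ceph.conf \
      --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c \
      --config /home/ceph-admin/assimilate_ceph.conf \
      --single-host-defaults --skip-monitoring-stack --skip-dashboard \
      --mon-ip 192.168.122.100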
Nov 26 01:13:17 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 01:13:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 01:13:17 compute-0 systemd-logind[800]: New session 25 of user ceph-admin.
Nov 26 01:13:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 01:13:17 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 01:13:17 compute-0 systemd[191616]: Queued start job for default target Main User Target.
Nov 26 01:13:17 compute-0 systemd[191616]: Created slice User Application Slice.
Nov 26 01:13:17 compute-0 systemd[191616]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 01:13:17 compute-0 systemd[191616]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 01:13:17 compute-0 systemd[191616]: Reached target Paths.
Nov 26 01:13:17 compute-0 systemd[191616]: Reached target Timers.
Nov 26 01:13:17 compute-0 systemd[191616]: Starting D-Bus User Message Bus Socket...
Nov 26 01:13:17 compute-0 systemd[191616]: Starting Create User's Volatile Files and Directories...
Nov 26 01:13:17 compute-0 systemd[191616]: Listening on D-Bus User Message Bus Socket.
Nov 26 01:13:17 compute-0 systemd[191616]: Reached target Sockets.
Nov 26 01:13:17 compute-0 systemd[191616]: Finished Create User's Volatile Files and Directories.
Nov 26 01:13:17 compute-0 systemd[191616]: Reached target Basic System.
Nov 26 01:13:17 compute-0 systemd[191616]: Reached target Main User Target.
Nov 26 01:13:17 compute-0 systemd[191616]: Startup finished in 194ms.
Nov 26 01:13:17 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 01:13:17 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 26 01:13:17 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 26 01:13:17 compute-0 systemd-logind[800]: Session 25 logged out. Waiting for processes to exit.
Nov 26 01:13:17 compute-0 systemd-logind[800]: Removed session 25.
Nov 26 01:13:22 compute-0 podman[191711]: 2025-11-26 01:13:22.832244876 +0000 UTC m=+0.383640804 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 01:13:23 compute-0 podman[191730]: 2025-11-26 01:13:23.522647547 +0000 UTC m=+0.083603910 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=)
Nov 26 01:13:28 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 26 01:13:28 compute-0 systemd[191616]: Activating special unit Exit the Session...
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped target Main User Target.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped target Basic System.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped target Paths.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped target Sockets.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped target Timers.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 01:13:28 compute-0 systemd[191616]: Closed D-Bus User Message Bus Socket.
Nov 26 01:13:28 compute-0 systemd[191616]: Stopped Create User's Volatile Files and Directories.
Nov 26 01:13:28 compute-0 systemd[191616]: Removed slice User Application Slice.
Nov 26 01:13:28 compute-0 systemd[191616]: Reached target Shutdown.
Nov 26 01:13:28 compute-0 systemd[191616]: Finished Exit the Session.
Nov 26 01:13:28 compute-0 systemd[191616]: Reached target Exit the Session.
Nov 26 01:13:28 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 26 01:13:28 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 26 01:13:28 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 26 01:13:28 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 26 01:13:28 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 26 01:13:28 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 26 01:13:28 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
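This teardown, together with the matching session start at 01:13:17, is normal churn for a non-lingering account: each SSH connection that cephadm opens as ceph-admin (UID 42477) makes logind start a transient user@42477.service plus /run/user/42477, and both are removed once the last session ends. A sketch for inspecting that state, assuming systemd's loginctl is available:

import subprocess

# Ask logind about UID 42477 (ceph-admin in the log above).
res = subprocess.run(["loginctl", "show-user", "42477"],
                     capture_output=True, text=True)
if res.returncode != 0:
    # No active session and Linger=no: exactly the state after the
    # "Removed slice User Slice of UID 42477" line.
    print("user 42477 not logged in or lingering:", res.stderr.strip())
else:
    props = dict(line.split("=", 1)
                 for line in res.stdout.splitlines() if "=" in line)
    print("Linger:", props.get("Linger"), "Sessions:", props.get("Sessions"))
# "loginctl enable-linger 42477" would keep user@42477.service running
# across logouts; this deployment leaves linger off, hence the teardown.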
Nov 26 01:13:29 compute-0 podman[158021]: time="2025-11-26T01:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:13:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Nov 26 01:13:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2984 "" "Go-http-client/1.1"
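These two GETs are a metrics scrape: a Go client (per the User-Agent) walks the libpod REST API over the podman socket, matching the podman_exporter configuration further down, which sets CONTAINER_HOST=unix:///run/podman/podman.sock. A stdlib-only sketch of the first call; the /v4.9.3 prefix and socket path are taken from these log lines, and the podman system service must be listening on that socket:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # Standard trick: reuse http.client but connect over AF_UNIX.
    def __init__(self, path):
        super().__init__("localhost")
        self._unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Names"][0], c["State"])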
Nov 26 01:13:31 compute-0 openstack_network_exporter[160178]: ERROR   01:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:13:31 compute-0 openstack_network_exporter[160178]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:13:31 compute-0 openstack_network_exporter[160178]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:13:31 compute-0 openstack_network_exporter[160178]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:13:31 compute-0 openstack_network_exporter[160178]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
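One plausible reading of these exporter errors: openstack_network_exporter issues appctl-style calls through the control sockets under its mounted /var/run/openvswitch and /var/lib/openvswitch/ovn (mapped to /run/openvswitch and /run/ovn in the container), and on this node it finds no ovn-northd socket (northd runs on the control plane, not on computes) and no userspace (netdev) datapath for the pmd-* commands. A quick host-side check, assuming the same paths:

import glob

# OVS/OVN control sockets are named <daemon>.<pid>.ctl.
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control sockets found")
# Missing ovn-northd.*.ctl is expected on a compute node, and the
# dpif-netdev errors just mean ovs-vswitchd has no netdev datapath, so
# these messages are likely scrape-time noise rather than a fault.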
Nov 26 01:13:43 compute-0 podman[191765]: 2025-11-26 01:13:43.168556274 +0000 UTC m=+11.725988906 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:13:43 compute-0 podman[191775]: 2025-11-26 01:13:43.201985175 +0000 UTC m=+7.767115017 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 26 01:13:43 compute-0 podman[191670]: 2025-11-26 01:13:43.271643733 +0000 UTC m=+25.271025243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:43 compute-0 podman[191807]: 2025-11-26 01:13:43.367709338 +0000 UTC m=+0.161035041 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.39806962 +0000 UTC m=+0.088455028 container create f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.357279726 +0000 UTC m=+0.047665174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:43 compute-0 systemd[1]: Started libpod-conmon-f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d.scope.
Nov 26 01:13:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.531645915 +0000 UTC m=+0.222031363 container init f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.543410102 +0000 UTC m=+0.233795510 container start f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.549909741 +0000 UTC m=+0.240295159 container attach f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:43 compute-0 ecstatic_northcutt[191849]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 26 01:13:43 compute-0 systemd[1]: libpod-f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d.scope: Deactivated successfully.
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.849471765 +0000 UTC m=+0.539857183 container died f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0994f5ef233d07bd375dc57d2dec9073c58e02aada150d63e363f9705255f6b3-merged.mount: Deactivated successfully.
Nov 26 01:13:43 compute-0 podman[191826]: 2025-11-26 01:13:43.93936567 +0000 UTC m=+0.629751088 container remove f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d (image=quay.io/ceph/ceph:v18, name=ecstatic_northcutt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:13:43 compute-0 systemd[1]: libpod-conmon-f130bfab53637efb0bc78fb2913d088afe655df8dd8435563f5328311c1dea0d.scope: Deactivated successfully.
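The create/start/attach/died/remove cycle above is cephadm probing the image with a throwaway container whose only output is the version banner on the ecstatic_northcutt line. An assumed-equivalent probe (not necessarily cephadm's exact invocation):

import subprocess

out = subprocess.run(
    ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # e.g. "ceph version 18.2.7 (...) reef (stable)" as logged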
Nov 26 01:13:44 compute-0 podman[191864]: 2025-11-26 01:13:44.068698724 +0000 UTC m=+0.088135550 container create 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:13:44 compute-0 podman[191864]: 2025-11-26 01:13:44.035358634 +0000 UTC m=+0.054795460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:44 compute-0 systemd[1]: Started libpod-conmon-6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35.scope.
Nov 26 01:13:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:44 compute-0 podman[191864]: 2025-11-26 01:13:44.203543201 +0000 UTC m=+0.222980067 container init 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:13:44 compute-0 podman[191864]: 2025-11-26 01:13:44.217579937 +0000 UTC m=+0.237016753 container start 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:44 compute-0 podman[191864]: 2025-11-26 01:13:44.224709473 +0000 UTC m=+0.244146329 container attach 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:44 compute-0 elated_cohen[191879]: 167 167
Nov 26 01:13:44 compute-0 systemd[1]: libpod-6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35.scope: Deactivated successfully.
Nov 26 01:13:44 compute-0 podman[191884]: 2025-11-26 01:13:44.317080683 +0000 UTC m=+0.063987800 container died 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49b4b0f5c2ed6fa9839ee1323a3a7f2275f70863483cf257f2c2309ae19bdee-merged.mount: Deactivated successfully.
Nov 26 01:13:44 compute-0 podman[191884]: 2025-11-26 01:13:44.387166201 +0000 UTC m=+0.134073298 container remove 6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35 (image=quay.io/ceph/ceph:v18, name=elated_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:13:44 compute-0 systemd[1]: libpod-conmon-6e04c4476b89b8871bb2d4e94fbda851017a6e986e112f0ce0511e56aa4cfe35.scope: Deactivated successfully.
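The bare "167 167" printed by elated_cohen looks like a uid/gid probe: cephadm needs the ceph user's numeric uid and gid inside the image so it can chown host-side state directories to match. The exact command is not in the log; a hypothetical equivalent:

import subprocess

uid_gid = subprocess.run(
    ["podman", "run", "--rm", "quay.io/ceph/ceph:v18",
     "stat", "-c", "%u %g", "/var/lib/ceph"],  # probed path is an assumption
    capture_output=True, text=True, check=True,
).stdout.strip()
print(uid_gid)  # "167 167": the ceph uid/gid baked into upstream images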
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.5450833 +0000 UTC m=+0.093928651 container create 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.511649528 +0000 UTC m=+0.060494939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:44 compute-0 systemd[1]: Started libpod-conmon-830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f.scope.
Nov 26 01:13:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.685968515 +0000 UTC m=+0.234813906 container init 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.703391089 +0000 UTC m=+0.252236440 container start 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.710451163 +0000 UTC m=+0.259296524 container attach 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:44 compute-0 reverent_colden[191911]: AQBIVCZpRyOKLBAAKe+l43Rog+qrFgZDFx7/yQ==
Nov 26 01:13:44 compute-0 systemd[1]: libpod-830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f.scope: Deactivated successfully.
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.756258528 +0000 UTC m=+0.305103879 container died 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0776fe9404b8f0b86017b8659f6e391cdcb516846ef098ad089de2996559574-merged.mount: Deactivated successfully.
Nov 26 01:13:44 compute-0 podman[191897]: 2025-11-26 01:13:44.824450217 +0000 UTC m=+0.373295548 container remove 830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f (image=quay.io/ceph/ceph:v18, name=reverent_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:44 compute-0 systemd[1]: libpod-conmon-830c6cd25e136226769a36e0c541fd955ca730c7e37b65a93b68c0a0fdfc124f.scope: Deactivated successfully.
Nov 26 01:13:44 compute-0 podman[191932]: 2025-11-26 01:13:44.942282141 +0000 UTC m=+0.080695826 container create c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:44.90697864 +0000 UTC m=+0.045392355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:45 compute-0 systemd[1]: Started libpod-conmon-c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951.scope.
Nov 26 01:13:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:45.062379644 +0000 UTC m=+0.200793309 container init c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:45.075740102 +0000 UTC m=+0.214153747 container start c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:45.080564868 +0000 UTC m=+0.218978523 container attach c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:13:45 compute-0 vigilant_bassi[191948]: AQBJVCZp3fMXBhAAndt/Tx1ncAVDkkKIXTEazQ==
Nov 26 01:13:45 compute-0 systemd[1]: libpod-c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951.scope: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:45.109469642 +0000 UTC m=+0.247883287 container died c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb2c1765ce5e43961a2e6fc95ae8786ab8aa641a8812f30f50d753f8ad40101b-merged.mount: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191932]: 2025-11-26 01:13:45.177725033 +0000 UTC m=+0.316138708 container remove c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951 (image=quay.io/ceph/ceph:v18, name=vigilant_bassi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:13:45 compute-0 systemd[1]: libpod-conmon-c8226a6fa271687cbd4820a4267699458747c417ba02ad3a06e774a6150f8951.scope: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.308172465 +0000 UTC m=+0.084834534 container create 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.273147532 +0000 UTC m=+0.049809621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:45 compute-0 systemd[1]: Started libpod-conmon-1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2.scope.
Nov 26 01:13:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.437195121 +0000 UTC m=+0.213857240 container init 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.45135698 +0000 UTC m=+0.228019059 container start 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.459199325 +0000 UTC m=+0.235861394 container attach 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:13:45 compute-0 magical_lewin[191981]: AQBJVCZpN+A3HRAA7aLSYsOsdRgU5ENS6G5CSA==
Nov 26 01:13:45 compute-0 systemd[1]: libpod-1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2.scope: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.498790617 +0000 UTC m=+0.275452686 container died 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:13:45 compute-0 podman[191965]: 2025-11-26 01:13:45.562208681 +0000 UTC m=+0.338870760 container remove 1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2 (image=quay.io/ceph/ceph:v18, name=magical_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa2ada374ac62a0c873877942620d2062f647ed4cd3e5a1c12bc07ec73ce8dc-merged.mount: Deactivated successfully.
Nov 26 01:13:45 compute-0 systemd[1]: libpod-conmon-1753de8a62fa9c6ef119fd42718a2adee1ddd49ff06ee11858876bae032a22d2.scope: Deactivated successfully.
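The three base64 strings printed by reverent_colden, vigilant_bassi and magical_lewin have the shape of freshly minted CephX secrets, presumably the initial mon/admin/bootstrap keys; note that they land in the journal in clear text. cephadm generates such keys with ceph-authtool, which is assumed (not confirmed by the log) to be what ran in these containers:

import subprocess

key = subprocess.run(
    ["podman", "run", "--rm", "quay.io/ceph/ceph:v18",
     "ceph-authtool", "--gen-print-key"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(key)  # a base64 CephX secret, same shape as the AQB... values above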
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.695483957 +0000 UTC m=+0.089177126 container create 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:45 compute-0 systemd[1]: Started libpod-conmon-2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265.scope.
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.667108197 +0000 UTC m=+0.060801446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0536720b84257fe12a0760e2f337e4c82eb4f98e6d7b42fc2fc3789217ed4acb/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.82134704 +0000 UTC m=+0.215040239 container init 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.836604268 +0000 UTC m=+0.230297467 container start 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.842674626 +0000 UTC m=+0.236367865 container attach 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:13:45 compute-0 amazing_fermi[192015]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 26 01:13:45 compute-0 amazing_fermi[192015]: setting min_mon_release = pacific
Nov 26 01:13:45 compute-0 amazing_fermi[192015]: /usr/bin/monmaptool: set fsid to 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:45 compute-0 amazing_fermi[192015]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 26 01:13:45 compute-0 systemd[1]: libpod-2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265.scope: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.877037323 +0000 UTC m=+0.270730512 container died 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0536720b84257fe12a0760e2f337e4c82eb4f98e6d7b42fc2fc3789217ed4acb-merged.mount: Deactivated successfully.
Nov 26 01:13:45 compute-0 podman[191999]: 2025-11-26 01:13:45.947655865 +0000 UTC m=+0.341349064 container remove 2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265 (image=quay.io/ceph/ceph:v18, name=amazing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:13:45 compute-0 systemd[1]: libpod-conmon-2e6501910e77204cdc3caabd19a139c50a67abe8038ec0b1772285d9ab8e7265.scope: Deactivated successfully.
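The amazing_fermi output shows cephadm seeding the initial monitor map with the bootstrap fsid. A reconstruction with the flags inferred from that output: the v2:3300/v1:6789 ports are Ceph defaults assumed here, the actual invocation (including how min_mon_release ended up as pacific) is not in the log, and in the real run /tmp/monmap was bind-mounted from the host, per the xfs remount message above.

import subprocess

subprocess.run(
    ["podman", "run", "--rm", "quay.io/ceph/ceph:v18",
     "monmaptool", "--create",
     "--fsid", "36901f64-240e-5c29-a2e2-29b56f2c329c",
     "--addv", "compute-0",
     "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]",
     "/tmp/monmap"],  # written inside the throwaway container here
    check=True,
)
# Expected output mirrors the log: "set fsid to 36901f64-..." and
# "writing epoch 0 to /tmp/monmap (1 monitors)".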
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.04522659 +0000 UTC m=+0.062736597 container create 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:13:46 compute-0 systemd[1]: Started libpod-conmon-4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61.scope.
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.023299548 +0000 UTC m=+0.040809635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f595318aee2d13b4e7457984525daed73fc3a90036512e3b9eb6a53355b6e5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f595318aee2d13b4e7457984525daed73fc3a90036512e3b9eb6a53355b6e5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f595318aee2d13b4e7457984525daed73fc3a90036512e3b9eb6a53355b6e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f595318aee2d13b4e7457984525daed73fc3a90036512e3b9eb6a53355b6e5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.18360531 +0000 UTC m=+0.201115357 container init 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.202929554 +0000 UTC m=+0.220439551 container start 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.208734175 +0000 UTC m=+0.226244272 container attach 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:13:46 compute-0 systemd[1]: libpod-4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61.scope: Deactivated successfully.
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.369386376 +0000 UTC m=+0.386896403 container died 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-68f595318aee2d13b4e7457984525daed73fc3a90036512e3b9eb6a53355b6e5-merged.mount: Deactivated successfully.
Nov 26 01:13:46 compute-0 podman[192032]: 2025-11-26 01:13:46.449479085 +0000 UTC m=+0.466989102 container remove 4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61 (image=quay.io/ceph/ceph:v18, name=wizardly_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:13:46 compute-0 systemd[1]: libpod-conmon-4387722270e9fae5257af36f02327a40c4a46d6c5d581b2cdbed687e1e908c61.scope: Deactivated successfully.
Nov 26 01:13:46 compute-0 systemd[1]: Reloading.
Nov 26 01:13:46 compute-0 podman[192078]: 2025-11-26 01:13:46.53130823 +0000 UTC m=+0.102562287 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:13:46 compute-0 podman[192075]: 2025-11-26 01:13:46.557608346 +0000 UTC m=+0.135365812 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=)
Nov 26 01:13:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:46 compute-0 systemd[1]: Reloading.
Nov 26 01:13:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:47 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 26 01:13:47 compute-0 systemd[1]: Reloading.
Nov 26 01:13:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:47 compute-0 systemd[1]: Reached target Ceph cluster 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:47 compute-0 systemd[1]: Reloading.
Nov 26 01:13:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:48 compute-0 systemd[1]: Reloading.
Nov 26 01:13:48 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:48 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:48 compute-0 systemd[1]: Created slice Slice /system/ceph-36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:48 compute-0 systemd[1]: Reached target System Time Set.
Nov 26 01:13:48 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 26 01:13:48 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:13:49 compute-0 podman[192364]: 2025-11-26 01:13:49.063353489 +0000 UTC m=+0.096673893 container create 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8 (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:13:49 compute-0 podman[192364]: 2025-11-26 01:13:49.023658763 +0000 UTC m=+0.056979207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9bcedd56ea00d2e69370f75e756e797d51f9e252be1879885653d204c92b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9bcedd56ea00d2e69370f75e756e797d51f9e252be1879885653d204c92b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9bcedd56ea00d2e69370f75e756e797d51f9e252be1879885653d204c92b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9bcedd56ea00d2e69370f75e756e797d51f9e252be1879885653d204c92b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 podman[192364]: 2025-11-26 01:13:49.202345984 +0000 UTC m=+0.235666388 container init 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8 (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:13:49 compute-0 podman[192364]: 2025-11-26 01:13:49.230932149 +0000 UTC m=+0.264252553 container start 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8 (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:13:49 compute-0 bash[192364]: 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8
Nov 26 01:13:49 compute-0 systemd[1]: Started Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:49 compute-0 ceph-mon[192383]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: pidfile_write: ignore empty --pid-file
Nov 26 01:13:49 compute-0 ceph-mon[192383]: load: jerasure load: lrc 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Git sha 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: DB SUMMARY
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: DB Session ID:  L877KIEM64T2L7BYIP0A
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                                     Options.env: 0x5588d2622c40
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                                Options.info_log: 0x5588d309ce80
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                                 Options.wal_dir: 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                    Options.write_buffer_manager: 0x5588d30acb40
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                               Options.row_cache: None
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                              Options.wal_filter: None
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.wal_compression: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.max_background_jobs: 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Compression algorithms supported:
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kZSTD supported: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kXpressCompression supported: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kBZip2Compression supported: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kLZ4Compression supported: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kZlibCompression supported: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kLZ4HCCompression supported: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     kSnappyCompression supported: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:           Options.merge_operator: 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:        Options.compaction_filter: None
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5588d309ca80)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x5588d30951f0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.compression: NoCompression
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.num_levels: 7
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1b66c307-a42f-4c02-bd88-eabf0b9b04cc
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119629312690, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119629316480, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "L877KIEM64T2L7BYIP0A", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119629316744, "job": 1, "event": "recovery_finished"}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5588d30bee00
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: DB pointer 0x5588d3148000
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:13:49 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 0.0 total, 0.0 interval
    Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
    Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
     Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 0.0 total, 0.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x5588d30951f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
Nov 26 01:13:49 compute-0 ceph-mon[192383]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@-1(???) e0 preinit fsid 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 26 01:13:49 compute-0 ceph-mon[192383]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 26 01:13:49 compute-0 podman[192384]: 2025-11-26 01:13:49.378814767 +0000 UTC m=+0.080819849 container create 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 01:13:49 compute-0 ceph-mon[192383]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-26T01:13:46.282145Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).mds e1 new map
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).mds e1 print_map
    e1
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: -1

    No filesystems configured
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mkfs 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 01:13:49 compute-0 ceph-mon[192383]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 01:13:49 compute-0 podman[192384]: 2025-11-26 01:13:49.35440585 +0000 UTC m=+0.056411002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:49 compute-0 ceph-mon[192383]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:49 compute-0 systemd[1]: Started libpod-conmon-28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b.scope.
Nov 26 01:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b934c902cb8eb9be4fd9a3afb23cd1e7f52be126880150ba27765a7156f61e5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b934c902cb8eb9be4fd9a3afb23cd1e7f52be126880150ba27765a7156f61e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b934c902cb8eb9be4fd9a3afb23cd1e7f52be126880150ba27765a7156f61e5e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:49 compute-0 podman[192384]: 2025-11-26 01:13:49.549405767 +0000 UTC m=+0.251410929 container init 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 26 01:13:49 compute-0 podman[192384]: 2025-11-26 01:13:49.581917135 +0000 UTC m=+0.283922247 container start 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:49 compute-0 podman[192384]: 2025-11-26 01:13:49.589681237 +0000 UTC m=+0.291686419 container attach 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:13:50 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 01:13:50 compute-0 ceph-mon[192383]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322452013' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:  cluster:
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    id:     36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    health: HEALTH_OK
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]: 
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:  services:
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    mon: 1 daemons, quorum compute-0 (age 0.650628s)
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    mgr: no daemons active
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    osd: 0 osds: 0 up, 0 in
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]: 
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:  data:
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    pools:   0 pools, 0 pgs
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    objects: 0 objects, 0 B
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    usage:   0 B used, 0 B / 0 B avail
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]:    pgs:     
Nov 26 01:13:50 compute-0 heuristic_heyrovsky[192439]: 
Nov 26 01:13:50 compute-0 systemd[1]: libpod-28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b.scope: Deactivated successfully.
Nov 26 01:13:50 compute-0 podman[192466]: 2025-11-26 01:13:50.186688801 +0000 UTC m=+0.074087104 container died 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b934c902cb8eb9be4fd9a3afb23cd1e7f52be126880150ba27765a7156f61e5e-merged.mount: Deactivated successfully.
Nov 26 01:13:50 compute-0 podman[192466]: 2025-11-26 01:13:50.269020098 +0000 UTC m=+0.156418331 container remove 28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b (image=quay.io/ceph/ceph:v18, name=heuristic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:50 compute-0 systemd[1]: libpod-conmon-28eb6ea5bcc57e848399e3d93b8ea0118d9dc975687321a88b560bd397372a8b.scope: Deactivated successfully.
Nov 26 01:13:50 compute-0 podman[192478]: 2025-11-26 01:13:50.40634171 +0000 UTC m=+0.085455860 container create 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:13:50 compute-0 ceph-mon[192383]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:50 compute-0 podman[192478]: 2025-11-26 01:13:50.371901692 +0000 UTC m=+0.051015892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:50 compute-0 systemd[1]: Started libpod-conmon-57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5.scope.
Nov 26 01:13:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1563f0fd7c9e3522b770a4a2b6c1a7c58efd781a054fec4855462100596792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1563f0fd7c9e3522b770a4a2b6c1a7c58efd781a054fec4855462100596792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1563f0fd7c9e3522b770a4a2b6c1a7c58efd781a054fec4855462100596792/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1563f0fd7c9e3522b770a4a2b6c1a7c58efd781a054fec4855462100596792/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:50 compute-0 podman[192478]: 2025-11-26 01:13:50.584546159 +0000 UTC m=+0.263660389 container init 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:13:50 compute-0 podman[192478]: 2025-11-26 01:13:50.598964645 +0000 UTC m=+0.278078795 container start 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:13:50 compute-0 podman[192478]: 2025-11-26 01:13:50.603706999 +0000 UTC m=+0.282821219 container attach 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:51 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 01:13:51 compute-0 ceph-mon[192383]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1135833283' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:13:51 compute-0 ceph-mon[192383]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1135833283' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 01:13:51 compute-0 relaxed_kilby[192494]: 
Nov 26 01:13:51 compute-0 relaxed_kilby[192494]: [global]
Nov 26 01:13:51 compute-0 relaxed_kilby[192494]: 	fsid = 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:51 compute-0 relaxed_kilby[192494]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 26 01:13:51 compute-0 relaxed_kilby[192494]: 	osd_crush_chooseleaf_type = 0
Nov 26 01:13:51 compute-0 systemd[1]: libpod-57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5.scope: Deactivated successfully.
Nov 26 01:13:51 compute-0 podman[192478]: 2025-11-26 01:13:51.104218885 +0000 UTC m=+0.783333065 container died 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 26 01:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1563f0fd7c9e3522b770a4a2b6c1a7c58efd781a054fec4855462100596792-merged.mount: Deactivated successfully.
Nov 26 01:13:51 compute-0 podman[192478]: 2025-11-26 01:13:51.187646761 +0000 UTC m=+0.866760911 container remove 57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5 (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:51 compute-0 systemd[1]: libpod-conmon-57c0c66aeb6fd18fdb04333373931bbfdbe71d04d37d6087d22b76319d81f7a5.scope: Deactivated successfully.
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.315701941 +0000 UTC m=+0.094807644 container create a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.268811878 +0000 UTC m=+0.047917631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:51 compute-0 systemd[1]: Started libpod-conmon-a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba.scope.
Nov 26 01:13:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f82e0c8607f3176913cb0c9b4335116a91c974e9b70bca395c1a896f5829d84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f82e0c8607f3176913cb0c9b4335116a91c974e9b70bca395c1a896f5829d84/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f82e0c8607f3176913cb0c9b4335116a91c974e9b70bca395c1a896f5829d84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f82e0c8607f3176913cb0c9b4335116a91c974e9b70bca395c1a896f5829d84/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:51 compute-0 ceph-mon[192383]: from='client.? 192.168.122.100:0/1135833283' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:13:51 compute-0 ceph-mon[192383]: from='client.? 192.168.122.100:0/1135833283' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.506639772 +0000 UTC m=+0.285745525 container init a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.522224389 +0000 UTC m=+0.301330082 container start a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.528056431 +0000 UTC m=+0.307162124 container attach a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:13:51 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:13:51 compute-0 ceph-mon[192383]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319041119' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:13:51 compute-0 systemd[1]: libpod-a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba.scope: Deactivated successfully.
Nov 26 01:13:51 compute-0 podman[192532]: 2025-11-26 01:13:51.978019668 +0000 UTC m=+0.757125371 container died a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f82e0c8607f3176913cb0c9b4335116a91c974e9b70bca395c1a896f5829d84-merged.mount: Deactivated successfully.
Nov 26 01:13:52 compute-0 podman[192532]: 2025-11-26 01:13:52.070048109 +0000 UTC m=+0.849153792 container remove a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba (image=quay.io/ceph/ceph:v18, name=exciting_hodgkin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:13:52 compute-0 systemd[1]: libpod-conmon-a81c8a9077fc49b43c03129de11bc733e59b5dc28fb7166b12899bc095f0f4ba.scope: Deactivated successfully.
Nov 26 01:13:52 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:13:52 compute-0 ceph-mon[192383]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 01:13:52 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 01:13:52 compute-0 ceph-mon[192383]: mon.compute-0@0(leader) e1 shutdown
Nov 26 01:13:52 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0[192379]: 2025-11-26T01:13:52.477+0000 7f8c5366c640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 26 01:13:52 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0[192379]: 2025-11-26T01:13:52.477+0000 7f8c5366c640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 26 01:13:52 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 01:13:52 compute-0 ceph-mon[192383]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 01:13:52 compute-0 podman[192613]: 2025-11-26 01:13:52.638655811 +0000 UTC m=+0.266334668 container died 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8 (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44b9bcedd56ea00d2e69370f75e756e797d51f9e252be1879885653d204c92b-merged.mount: Deactivated successfully.
Nov 26 01:13:52 compute-0 podman[192613]: 2025-11-26 01:13:52.711966034 +0000 UTC m=+0.339644891 container remove 98b91f7de9bc929e0146f12e5c0f0012a2bc350a2885d51358ef744afe3a5cd8 (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:52 compute-0 bash[192613]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0
Nov 26 01:13:52 compute-0 systemd[1]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mon.compute-0.service: Deactivated successfully.
Nov 26 01:13:52 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:52 compute-0 systemd[1]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mon.compute-0.service: Consumed 2.280s CPU time.
Nov 26 01:13:52 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:13:53 compute-0 podman[192664]: 2025-11-26 01:13:53.131708662 +0000 UTC m=+0.149126321 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:13:53 compute-0 podman[192727]: 2025-11-26 01:13:53.460884699 +0000 UTC m=+0.095924374 container create 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:13:53 compute-0 podman[192727]: 2025-11-26 01:13:53.423992206 +0000 UTC m=+0.059031941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61726efc926fa00aa0117ae3224fc89ddfc5072433a29275cefc6501012e5fe4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61726efc926fa00aa0117ae3224fc89ddfc5072433a29275cefc6501012e5fe4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61726efc926fa00aa0117ae3224fc89ddfc5072433a29275cefc6501012e5fe4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61726efc926fa00aa0117ae3224fc89ddfc5072433a29275cefc6501012e5fe4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 podman[192727]: 2025-11-26 01:13:53.615419 +0000 UTC m=+0.250458715 container init 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:13:53 compute-0 podman[192727]: 2025-11-26 01:13:53.629905748 +0000 UTC m=+0.264945423 container start 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:13:53 compute-0 bash[192727]: 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d
Nov 26 01:13:53 compute-0 systemd[1]: Started Ceph mon.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:53 compute-0 ceph-mon[192746]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: pidfile_write: ignore empty --pid-file
Nov 26 01:13:53 compute-0 ceph-mon[192746]: load: jerasure load: lrc 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Git sha 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: DB SUMMARY
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: DB Session ID:  U5291X29YJY3W7NSASL8
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55970 ; 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                                     Options.env: 0x5636b79ecc40
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                                Options.info_log: 0x5636b9563040
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                                 Options.wal_dir: 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                    Options.write_buffer_manager: 0x5636b9572b40
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                               Options.row_cache: None
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                              Options.wal_filter: None
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.wal_compression: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.max_background_jobs: 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.max_total_wal_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:       Options.compaction_readahead_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Compression algorithms supported:
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kZSTD supported: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kXpressCompression supported: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kZlibCompression supported: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:           Options.merge_operator: 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:        Options.compaction_filter: None
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636b9562c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5636b955b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:        Options.write_buffer_size: 33554432
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:  Options.max_write_buffer_number: 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.compression: NoCompression
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.num_levels: 7
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1b66c307-a42f-4c02-bd88-eabf0b9b04cc
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119633717046, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119633726388, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 144, "table_properties": {"data_size": 53944, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3079, "raw_average_key_size": 29, "raw_value_size": 51507, "raw_average_value_size": 500, "num_data_blocks": 9, "num_entries": 103, "num_filter_entries": 103, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119633, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119633726906, "job": 1, "event": "recovery_finished"}
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5636b9584e00
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: DB pointer 0x5636b960e000
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   56.07 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0   56.07 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.46 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 512.00 MB usage: 1.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,5.03 KB,0.000959635%) FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 26 01:13:53 compute-0 ceph-mon[192746]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???) e1 preinit fsid 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).mds e1 new map
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
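The FSMap printed above is empty: epoch e1 and no filesystems configured, which is expected this early in bootstrap. A small sketch for confirming the same thing against the running cluster via the standard ceph CLI (requires an admin keyring on the node):

    # fs_check.py - a sketch, not part of this deployment's tooling.
    import json
    import subprocess

    out = subprocess.run(["ceph", "fs", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out))  # expect [] while "No filesystems configured"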
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 26 01:13:53 compute-0 ceph-mon[192746]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
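win_standalone_election means this is a single-monitor cluster: with only one rank in the monmap, mon.compute-0 forms quorum by itself. A sketch for checking quorum membership afterwards with the standard ceph CLI (run wherever an admin keyring is available):

    # quorum_check.py - a sketch, not part of this deployment's tooling.
    import json
    import subprocess

    out = subprocess.run(["ceph", "quorum_status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    qs = json.loads(out)
    # with a single monitor, quorum is [0] and the leader is compute-0
    print(qs["quorum"], qs["quorum_leader_name"])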
Nov 26 01:13:53 compute-0 podman[192748]: 2025-11-26 01:13:53.784428718 +0000 UTC m=+0.095418160 container create 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 01:13:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 26 01:13:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : fsmap 
Nov 26 01:13:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 26 01:13:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 26 01:13:53 compute-0 podman[192747]: 2025-11-26 01:13:53.83509684 +0000 UTC m=+0.143572656 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, container_name=kepler, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm)
Nov 26 01:13:53 compute-0 systemd[1]: Started libpod-conmon-6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13.scope.
Nov 26 01:13:53 compute-0 podman[192748]: 2025-11-26 01:13:53.756362556 +0000 UTC m=+0.067351978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:53 compute-0 ceph-mon[192746]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 26 01:13:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37cf3782465b3aa6651894a4b6a06bb152afb21bf5315adc0740144a3384920/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37cf3782465b3aa6651894a4b6a06bb152afb21bf5315adc0740144a3384920/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37cf3782465b3aa6651894a4b6a06bb152afb21bf5315adc0740144a3384920/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
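The xfs notices above are informational: these overlay mounts carry inode timestamps capped at 0x7fffffff seconds after the epoch, i.e. the 32-bit signed time_t limit. A one-liner showing what that cap corresponds to:

    # y2038.py - 0x7fffffff is the largest 32-bit signed time_t value.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00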
Nov 26 01:13:53 compute-0 podman[192748]: 2025-11-26 01:13:53.930117519 +0000 UTC m=+0.241106931 container init 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:13:53 compute-0 podman[192748]: 2025-11-26 01:13:53.951421074 +0000 UTC m=+0.262410516 container start 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:13:53 compute-0 podman[192748]: 2025-11-26 01:13:53.957633227 +0000 UTC m=+0.268622679 container attach 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:13:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 26 01:13:54 compute-0 systemd[1]: libpod-6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13.scope: Deactivated successfully.
Nov 26 01:13:54 compute-0 podman[192748]: 2025-11-26 01:13:54.409862273 +0000 UTC m=+0.720851675 container died 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37cf3782465b3aa6651894a4b6a06bb152afb21bf5315adc0740144a3384920-merged.mount: Deactivated successfully.
Nov 26 01:13:54 compute-0 podman[192748]: 2025-11-26 01:13:54.468535374 +0000 UTC m=+0.779524776 container remove 6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13 (image=quay.io/ceph/ceph:v18, name=condescending_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:13:54 compute-0 systemd[1]: libpod-conmon-6410300da0af179dfc93ddb896581dfb4415c027b621dd1ac0e7a1d189032a13.scope: Deactivated successfully.
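The create/init/start/attach/died/remove sequence above is the complete lifecycle of a short-lived cephadm helper container: it ran one command inside the ceph image and exited. The same sequence can be observed live from podman's event stream; a minimal sketch, filtering on the auto-generated container name seen in this log:

    # watch_lifecycle.py - a sketch: stream podman lifecycle events for one
    # container; substitute the name you are interested in.
    import subprocess

    subprocess.run(["podman", "events",
                    "--filter", "container=condescending_heyrovsky"])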
Nov 26 01:13:54 compute-0 podman[192855]: 2025-11-26 01:13:54.57227553 +0000 UTC m=+0.076025664 container create 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:13:54 compute-0 podman[192855]: 2025-11-26 01:13:54.53855714 +0000 UTC m=+0.042307324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:54 compute-0 systemd[1]: Started libpod-conmon-44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257.scope.
Nov 26 01:13:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d67611e2415e243ac6c18d3da114d737fe13ff98c65b217d1c580e518011230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d67611e2415e243ac6c18d3da114d737fe13ff98c65b217d1c580e518011230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d67611e2415e243ac6c18d3da114d737fe13ff98c65b217d1c580e518011230/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:54 compute-0 podman[192855]: 2025-11-26 01:13:54.728211478 +0000 UTC m=+0.231961642 container init 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:13:54 compute-0 podman[192855]: 2025-11-26 01:13:54.756705691 +0000 UTC m=+0.260455825 container start 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:13:54 compute-0 podman[192855]: 2025-11-26 01:13:54.762527033 +0000 UTC m=+0.266277227 container attach 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 26 01:13:55 compute-0 systemd[1]: libpod-44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257.scope: Deactivated successfully.
Nov 26 01:13:55 compute-0 podman[192855]: 2025-11-26 01:13:55.197635253 +0000 UTC m=+0.701385397 container died 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 26 01:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d67611e2415e243ac6c18d3da114d737fe13ff98c65b217d1c580e518011230-merged.mount: Deactivated successfully.
Nov 26 01:13:55 compute-0 podman[192855]: 2025-11-26 01:13:55.27421355 +0000 UTC m=+0.777963694 container remove 44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257 (image=quay.io/ceph/ceph:v18, name=lucid_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:13:55 compute-0 systemd[1]: libpod-conmon-44b219f9b01ce572781c28cf582d2b12df8472a03e616ec6fdb0c6b94ed5f257.scope: Deactivated successfully.
Nov 26 01:13:55 compute-0 systemd[1]: Reloading.
Nov 26 01:13:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:55 compute-0 systemd[1]: Reloading.
Nov 26 01:13:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:13:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:13:56 compute-0 systemd[1]: Starting Ceph mgr.compute-0.vbisdw for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:13:56 compute-0 podman[193031]: 2025-11-26 01:13:56.745640702 +0000 UTC m=+0.082313608 container create 7222fbf079f004d0b29a46437b8fe71c8aebc1bfdb76fa41562c4607ed386a7f (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:13:56 compute-0 podman[193031]: 2025-11-26 01:13:56.718259468 +0000 UTC m=+0.054932364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbf5b4bc93aecd6492155353481839a4e74af56c566c49e38d50ca01b600c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbf5b4bc93aecd6492155353481839a4e74af56c566c49e38d50ca01b600c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbf5b4bc93aecd6492155353481839a4e74af56c566c49e38d50ca01b600c9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbf5b4bc93aecd6492155353481839a4e74af56c566c49e38d50ca01b600c9f/merged/var/lib/ceph/mgr/ceph-compute-0.vbisdw supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:56 compute-0 podman[193031]: 2025-11-26 01:13:56.876217978 +0000 UTC m=+0.212890904 container init 7222fbf079f004d0b29a46437b8fe71c8aebc1bfdb76fa41562c4607ed386a7f (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:13:56 compute-0 podman[193031]: 2025-11-26 01:13:56.901280312 +0000 UTC m=+0.237953238 container start 7222fbf079f004d0b29a46437b8fe71c8aebc1bfdb76fa41562c4607ed386a7f (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:13:56 compute-0 bash[193031]: 7222fbf079f004d0b29a46437b8fe71c8aebc1bfdb76fa41562c4607ed386a7f
Nov 26 01:13:56 compute-0 systemd[1]: Started Ceph mgr.compute-0.vbisdw for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:13:56 compute-0 ceph-mgr[193049]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:13:56 compute-0 ceph-mgr[193049]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 01:13:56 compute-0 ceph-mgr[193049]: pidfile_write: ignore empty --pid-file
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.080155078 +0000 UTC m=+0.105091072 container create 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:13:57 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'alerts'
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.046090319 +0000 UTC m=+0.071026363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:13:57 compute-0 systemd[1]: Started libpod-conmon-19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f.scope.
Nov 26 01:13:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a30c88feedaeea4601c7984739a82bbb88019f13164b0499e384ef3f59c6508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a30c88feedaeea4601c7984739a82bbb88019f13164b0499e384ef3f59c6508/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a30c88feedaeea4601c7984739a82bbb88019f13164b0499e384ef3f59c6508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.25352393 +0000 UTC m=+0.278459984 container init 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.270134074 +0000 UTC m=+0.295070078 container start 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.275883323 +0000 UTC m=+0.300819318 container attach 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 01:13:57 compute-0 ceph-mgr[193049]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:13:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:13:57.423+0000 7f64b5e6a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:13:57 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'balancer'
Nov 26 01:13:57 compute-0 ceph-mgr[193049]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 01:13:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:13:57.679+0000 7f64b5e6a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
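The "missing NOTIFY_TYPES member" warnings are harmless: the mgr expects each python module to declare which cluster-map notifications it consumes, and logs this when the attribute is absent. For reference, a skeletal module with the declaration, sketched against the reef-era mgr_module API (an assumption; this is not code from this cluster):

    # my_module.py - skeletal ceph-mgr module.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # declare the notifications we consume; omitting this attribute is
        # what produces "Module ... has missing NOTIFY_TYPES member"
        NOTIFY_TYPES = [NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("got %s notification", notify_type)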
Nov 26 01:13:57 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'cephadm'
Nov 26 01:13:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:13:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042696676' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]: 
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]: {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "health": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "status": "HEALTH_OK",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "checks": {},
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "mutes": []
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "election_epoch": 5,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "quorum": [
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        0
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    ],
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "quorum_names": [
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "compute-0"
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    ],
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "quorum_age": 3,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "monmap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "epoch": 1,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "min_mon_release_name": "reef",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_mons": 1
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "osdmap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "epoch": 1,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_osds": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_up_osds": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "osd_up_since": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_in_osds": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "osd_in_since": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_remapped_pgs": 0
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "pgmap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "pgs_by_state": [],
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_pgs": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_pools": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_objects": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "data_bytes": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "bytes_used": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "bytes_avail": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "bytes_total": 0
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "fsmap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "epoch": 1,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "by_rank": [],
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "up:standby": 0
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "mgrmap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "available": false,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "num_standbys": 0,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "modules": [
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:            "iostat",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:            "nfs",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:            "restful"
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        ],
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "services": {}
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "servicemap": {
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "epoch": 1,
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:        "services": {}
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    },
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]:    "progress_events": {}
Nov 26 01:13:57 compute-0 magical_elbakyan[193090]: }
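That JSON is the reply to the status mon_command dispatched at 01:13:57: one mon in quorum, no OSDs, no pools, and an mgr map that is not yet available. A minimal sketch of consuming such a capture; the field names come from the output above, and the saved-file path is a placeholder:

    # parse_status.py - read a saved copy of the `ceph status` JSON.
    import json

    with open("status.json") as f:   # placeholder path
        status = json.load(f)
    assert status["health"]["status"] == "HEALTH_OK"
    print("mons in quorum:", status["quorum_names"])        # ["compute-0"]
    print("osds:", status["osdmap"]["num_osds"])            # 0, none deployed yet
    print("mgr available:", status["mgrmap"]["available"])  # false at this point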
Nov 26 01:13:57 compute-0 systemd[1]: libpod-19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f.scope: Deactivated successfully.
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.791605456 +0000 UTC m=+0.816541460 container died 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a30c88feedaeea4601c7984739a82bbb88019f13164b0499e384ef3f59c6508-merged.mount: Deactivated successfully.
Nov 26 01:13:57 compute-0 podman[193050]: 2025-11-26 01:13:57.884020037 +0000 UTC m=+0.908956041 container remove 19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f (image=quay.io/ceph/ceph:v18, name=magical_elbakyan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 26 01:13:57 compute-0 systemd[1]: libpod-conmon-19915feeef8be5e5cd58ff2b4f5d156fdeaebc28d49bd5164bb773266b3bfb1f.scope: Deactivated successfully.
Nov 26 01:13:59 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'crash'
Nov 26 01:13:59 compute-0 podman[158021]: time="2025-11-26T01:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:13:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22105 "" "Go-http-client/1.1"
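The GET against /v4.9.3/libpod/containers/json above is a client polling podman's REST API over its unix socket. A self-contained sketch of the same request; the socket path is the usual rootful default and an assumption here:

    # podman_api.py - issue the libpod container listing over the unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host header only; real I/O is the socket
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")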
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.776 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3929 "" "Go-http-client/1.1"
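
The podman line above is a local collector (a Go client, per the user agent) polling the libpod REST API over its unix socket for a one-shot stats snapshot (stream=false). The same request can be replayed by hand; a minimal sketch, assuming the root socket at /run/podman/podman.sock (an assumption for this host; rootless podman publishes it under $XDG_RUNTIME_DIR/podman/podman.sock instead):

    # Sketch: replay the libpod stats call logged above. The socket path is
    # an assumption for this host; adjust for rootless podman.
    import json
    import subprocess

    SOCK = "/run/podman/podman.sock"
    URL = "http://d/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false"

    raw = subprocess.run(
        ["curl", "-s", "--unix-socket", SOCK, URL],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(raw)
    # Field names follow the libpod ContainerStatsReport ("Stats" entries
    # with "Name", "CPU", ...); verify against your podman version.
    for stat in report.get("Stats") or []:
        print(stat["Name"], stat["CPU"])
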
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:13:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:13:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
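
Taken together, the manager.py DEBUG lines above trace one full polling cycle: each pollster's discovery method (local_instances) runs once per cycle, returns an empty list because no guest instances exist on this compute node yet, and the pollster is skipped; the trailing "Finished processing pollster" lines close the cycle out. A minimal sketch of that discover-then-poll control flow, using simplified stand-in objects rather than ceilometer's real classes:

    # Sketch of the discover-then-poll cycle traced by the DEBUG lines above.
    # Pollster/discover are simplified stand-ins, not ceilometer's real API.
    import logging

    LOG = logging.getLogger("polling.manager")

    def run_cycle(pollsters, discover, discovery_cache):
        for pollster in pollsters:
            method = pollster.discovery_method        # e.g. "local_instances"
            if method not in discovery_cache:         # discover once per cycle
                discovery_cache[method] = discover(method)
            resources = discovery_cache[method]
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle",
                          pollster.name)
                continue
            for sample in pollster.get_samples(resources):
                yield sample
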
Nov 26 01:13:59 compute-0 ceph-mgr[193049]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 01:13:59 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'dashboard'
Nov 26 01:13:59 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:13:59.820+0000 7f64b5e6a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
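
The "missing NOTIFY_TYPES member" messages are ceph-mgr noting that a loaded module does not declare which cluster-map notifications it consumes; reef-era modules are expected to export a NOTIFY_TYPES list. A purely illustrative skeleton (not the crash module's source) using the public mgr_module API, which is only importable inside ceph-mgr itself:

    # Illustrative mgr module skeleton declaring NOTIFY_TYPES (reef-era API).
    # Shown only to explain the warning; not this module's actual code.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # ceph-mgr calls this only for the types listed above.
            self.log.debug("got %s notification", notify_type)
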
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.022657043 +0000 UTC m=+0.097443072 container create 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:13:59.986219963 +0000 UTC m=+0.061006022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:00 compute-0 systemd[1]: Started libpod-conmon-32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b.scope.
Nov 26 01:14:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a52f6c162843da7bdde4c82b3872d0fd6fedebdf2a9927646768005fbe4030/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a52f6c162843da7bdde4c82b3872d0fd6fedebdf2a9927646768005fbe4030/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a52f6c162843da7bdde4c82b3872d0fd6fedebdf2a9927646768005fbe4030/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.157713196 +0000 UTC m=+0.232499225 container init 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.180553862 +0000 UTC m=+0.255339881 container start 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.187169325 +0000 UTC m=+0.261955394 container attach 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 01:14:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940162444' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]: 
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]: {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "health": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "status": "HEALTH_OK",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "checks": {},
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "mutes": []
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "election_epoch": 5,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "quorum": [
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        0
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    ],
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "quorum_names": [
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "compute-0"
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    ],
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "quorum_age": 6,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "monmap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "epoch": 1,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "min_mon_release_name": "reef",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_mons": 1
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "osdmap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "epoch": 1,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_osds": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_up_osds": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "osd_up_since": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_in_osds": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "osd_in_since": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_remapped_pgs": 0
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "pgmap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "pgs_by_state": [],
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_pgs": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_pools": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_objects": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "data_bytes": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "bytes_used": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "bytes_avail": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "bytes_total": 0
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "fsmap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "epoch": 1,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "by_rank": [],
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "up:standby": 0
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "mgrmap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "available": false,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "num_standbys": 0,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "modules": [
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:            "iostat",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:            "nfs",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:            "restful"
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        ],
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "services": {}
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "servicemap": {
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "epoch": 1,
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:        "services": {}
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    },
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]:    "progress_events": {}
Nov 26 01:14:00 compute-0 strange_bhaskara[193154]: }
Nov 26 01:14:00 compute-0 systemd[1]: libpod-32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b.scope: Deactivated successfully.
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.677448614 +0000 UTC m=+0.752234643 container died 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a52f6c162843da7bdde4c82b3872d0fd6fedebdf2a9927646768005fbe4030-merged.mount: Deactivated successfully.
Nov 26 01:14:00 compute-0 podman[193139]: 2025-11-26 01:14:00.778667214 +0000 UTC m=+0.853453243 container remove 32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b (image=quay.io/ceph/ceph:v18, name=strange_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:14:00 compute-0 systemd[1]: libpod-conmon-32dc2b7bf0ab2af85723249e15c558c43be992f7249f27204668b84eb1c8344b.scope: Deactivated successfully.
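
The container create → start → attach → died → remove sequence above is the usual cephadm one-shot pattern: quay.io/ceph/ceph:v18 is started just long enough to run ceph status --format json-pretty (the JSON block after the attach line is that command's stdout) and is then torn down. The status itself shows a healthy bootstrap-stage cluster: one mon in quorum (compute-0), no OSDs, no pools, and no active mgr yet. A sketch of the same check run host-side, assuming a ceph CLI and admin keyring on the host (which a cephadm-only node may not have):

    # Sketch: run the same status check from the host and pick out the fields
    # printed by the container above. Assumes a host-side ceph CLI + keyring.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "status", "--format", "json-pretty"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print("health:", status["health"]["status"])        # e.g. HEALTH_OK
    print("mons in quorum:", status["quorum_names"])    # ["compute-0"]
    print("osds up/in: %d/%d" % (status["osdmap"]["num_up_osds"],
                                 status["osdmap"]["num_in_osds"]))
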
Nov 26 01:14:01 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'devicehealth'
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: ERROR   01:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:14:01 compute-0 ceph-mgr[193049]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 01:14:01 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 01:14:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:01.422+0000 7f64b5e6a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:14:01 compute-0 openstack_network_exporter[160178]: 
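
The exporter errors above mean it found no control socket for ovsdb-server, none for ovn-northd, and no userspace (dpif-netdev) datapath to query: on this node those daemons are either not running or keep their sockets somewhere the exporter is not looking. A quick host-side check, assuming the common default socket directories (packaging and containerization can move them):

    # Sketch: look for the OVS/OVN control sockets the exporter reports missing.
    # /var/run/openvswitch and /var/run/ovn are common defaults (an assumption;
    # container images may mount them elsewhere).
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")
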
Nov 26 01:14:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 01:14:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 01:14:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]:  from numpy import show_config as show_numpy_config
Nov 26 01:14:01 compute-0 ceph-mgr[193049]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 01:14:01 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'influx'
Nov 26 01:14:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:01.922+0000 7f64b5e6a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 01:14:02 compute-0 ceph-mgr[193049]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 01:14:02 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'insights'
Nov 26 01:14:02 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:02.145+0000 7f64b5e6a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 01:14:02 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'iostat'
Nov 26 01:14:02 compute-0 ceph-mgr[193049]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 01:14:02 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'k8sevents'
Nov 26 01:14:02 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:02.590+0000 7f64b5e6a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 01:14:02 compute-0 podman[193194]: 2025-11-26 01:14:02.907059134 +0000 UTC m=+0.086060236 container create 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:14:02 compute-0 podman[193194]: 2025-11-26 01:14:02.875659995 +0000 UTC m=+0.054661147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:02 compute-0 systemd[1]: Started libpod-conmon-7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26.scope.
Nov 26 01:14:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae92a353b3293182c2f2dcd747f370049fbeedbfd87e2972ec2e0efb77d39916/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae92a353b3293182c2f2dcd747f370049fbeedbfd87e2972ec2e0efb77d39916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae92a353b3293182c2f2dcd747f370049fbeedbfd87e2972ec2e0efb77d39916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:03 compute-0 podman[193194]: 2025-11-26 01:14:03.061213755 +0000 UTC m=+0.240214897 container init 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:14:03 compute-0 podman[193194]: 2025-11-26 01:14:03.094467683 +0000 UTC m=+0.273468775 container start 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:03 compute-0 podman[193194]: 2025-11-26 01:14:03.100926111 +0000 UTC m=+0.279927203 container attach 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:14:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:03 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936936268' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]: 
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]: {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "health": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "status": "HEALTH_OK",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "checks": {},
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "mutes": []
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "election_epoch": 5,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "quorum": [
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        0
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    ],
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "quorum_names": [
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "compute-0"
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    ],
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "quorum_age": 9,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "monmap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "epoch": 1,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "min_mon_release_name": "reef",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_mons": 1
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "osdmap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "epoch": 1,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_osds": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_up_osds": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "osd_up_since": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_in_osds": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "osd_in_since": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_remapped_pgs": 0
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "pgmap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "pgs_by_state": [],
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_pgs": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_pools": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_objects": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "data_bytes": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "bytes_used": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "bytes_avail": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "bytes_total": 0
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "fsmap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "epoch": 1,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "by_rank": [],
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "up:standby": 0
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "mgrmap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "available": false,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "num_standbys": 0,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "modules": [
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:            "iostat",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:            "nfs",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:            "restful"
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        ],
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "services": {}
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "servicemap": {
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "epoch": 1,
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:        "services": {}
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    },
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]:    "progress_events": {}
Nov 26 01:14:03 compute-0 hardcore_montalcini[193210]: }
Nov 26 01:14:03 compute-0 systemd[1]: libpod-7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26.scope: Deactivated successfully.
Nov 26 01:14:03 compute-0 podman[193194]: 2025-11-26 01:14:03.555896548 +0000 UTC m=+0.734897640 container died 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae92a353b3293182c2f2dcd747f370049fbeedbfd87e2972ec2e0efb77d39916-merged.mount: Deactivated successfully.
Nov 26 01:14:03 compute-0 podman[193194]: 2025-11-26 01:14:03.641106841 +0000 UTC m=+0.820107913 container remove 7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:14:03 compute-0 systemd[1]: libpod-conmon-7b82d2868b43f203c56b975c9e7a14ae285ef8dd9dd6cd60cee72879b0509c26.scope: Deactivated successfully.
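[Note] The create/start/attach, died, remove, "conmon ... Deactivated" sequence bracketing each snapshot is the throwaway-container pattern: the ceph CLI runs once inside quay.io/ceph/ceph:v18 and podman tears everything down again. A rough approximation of that invocation, not the literal cephadm command line; the bind mounts match the xfs remount messages that accompany each run:

#!/usr/bin/env python3
# Approximate the ephemeral "ceph status" container seen in this log:
# --rm removes the container on exit, which produces the died/remove
# events, and the -v mounts are why the kernel logs xfs remount notices
# for ceph.conf, the admin keyring, and /var/log/ceph on every invocation.
import subprocess

cmd = [
    "podman", "run", "--rm", "--net=host",
    "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z",
    "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z",
    "-v", "/var/log/ceph:/var/log/ceph:z",
    "quay.io/ceph/ceph:v18",
    "ceph", "status", "--format", "json-pretty",
]
print(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)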
Nov 26 01:14:04 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'localpool'
Nov 26 01:14:04 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 01:14:05 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'mirroring'
Nov 26 01:14:05 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'nfs'
Nov 26 01:14:05 compute-0 podman[193249]: 2025-11-26 01:14:05.786981087 +0000 UTC m=+0.096354815 container create fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:05 compute-0 podman[193249]: 2025-11-26 01:14:05.751176483 +0000 UTC m=+0.060550281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:05 compute-0 systemd[1]: Started libpod-conmon-fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87.scope.
Nov 26 01:14:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee04619e0231294714ac4b5cf78a178e6779e18f846c8f7eaa4b15ffbcc4b2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee04619e0231294714ac4b5cf78a178e6779e18f846c8f7eaa4b15ffbcc4b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee04619e0231294714ac4b5cf78a178e6779e18f846c8f7eaa4b15ffbcc4b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:05 compute-0 podman[193249]: 2025-11-26 01:14:05.951306194 +0000 UTC m=+0.260679922 container init fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:14:05 compute-0 podman[193249]: 2025-11-26 01:14:05.968470531 +0000 UTC m=+0.277844249 container start fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:14:05 compute-0 podman[193249]: 2025-11-26 01:14:05.972734562 +0000 UTC m=+0.282108360 container attach fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:14:06 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:06.152+0000 7f64b5e6a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 01:14:06 compute-0 ceph-mgr[193049]: mgr[py] Module nfs has missing NOTIFY_TYPES member
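[Note] The recurring "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" lines are harmless: Reef's mgr looks for a NOTIFY_TYPES list on each Python module to scope its notify() callbacks, and modules that do not declare one trigger this warning at load time. A minimal illustration of the member; mgr_module is importable only inside ceph-mgr's embedded interpreter, and the Example class is hypothetical, not one of the stock modules being loaded here:

# Runs only inside ceph-mgr's embedded Python, where mgr_module exists.
from mgr_module import MgrModule, NotifyType

class Example(MgrModule):
    # Declaring the member silences the warning and limits which cluster
    # map updates this module's notify() is called for.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.info("got %s notification", notify_type)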
Nov 26 01:14:06 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'orchestrator'
Nov 26 01:14:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822892879' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:06 compute-0 magical_nightingale[193265]: 
Nov 26 01:14:06 compute-0 magical_nightingale[193265]: {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "health": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "status": "HEALTH_OK",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "checks": {},
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "mutes": []
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "election_epoch": 5,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "quorum": [
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        0
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    ],
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "quorum_names": [
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "compute-0"
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    ],
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "quorum_age": 12,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "monmap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "epoch": 1,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "min_mon_release_name": "reef",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_mons": 1
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "osdmap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "epoch": 1,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_osds": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_up_osds": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "osd_up_since": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_in_osds": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "osd_in_since": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_remapped_pgs": 0
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "pgmap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "pgs_by_state": [],
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_pgs": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_pools": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_objects": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "data_bytes": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "bytes_used": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "bytes_avail": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "bytes_total": 0
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "fsmap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "epoch": 1,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "by_rank": [],
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "up:standby": 0
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "mgrmap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "available": false,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "num_standbys": 0,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "modules": [
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:            "iostat",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:            "nfs",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:            "restful"
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        ],
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "services": {}
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "servicemap": {
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "epoch": 1,
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:        "services": {}
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    },
Nov 26 01:14:06 compute-0 magical_nightingale[193265]:    "progress_events": {}
Nov 26 01:14:06 compute-0 magical_nightingale[193265]: }
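[Note] Near-identical snapshots arrive every few seconds (quorum_age climbs 12, 15, 19 across them while mgrmap.available stays false), i.e. the deployment tooling at 192.168.122.100 is polling until the mgr finishes loading modules. A hedged sketch of such a wait loop; the 60-try/5-second budget and function names are illustrative:

#!/usr/bin/env python3
# Poll "ceph status" until mgrmap.available flips to true, mirroring the
# repeated json-pretty captures in this log. Retry budget is arbitrary.
import json
import subprocess
import time

def ceph_status() -> dict:
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def wait_for_mgr(tries: int = 60, delay: float = 5.0) -> dict:
    for _ in range(tries):
        s = ceph_status()
        if s["mgrmap"]["available"]:
            return s
        time.sleep(delay)
    raise TimeoutError("mgr never became available")

if __name__ == "__main__":
    print("mgr up:", wait_for_mgr()["mgrmap"])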
Nov 26 01:14:06 compute-0 systemd[1]: libpod-fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87.scope: Deactivated successfully.
Nov 26 01:14:06 compute-0 podman[193249]: 2025-11-26 01:14:06.477475819 +0000 UTC m=+0.786849577 container died fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee04619e0231294714ac4b5cf78a178e6779e18f846c8f7eaa4b15ffbcc4b2c-merged.mount: Deactivated successfully.
Nov 26 01:14:06 compute-0 podman[193249]: 2025-11-26 01:14:06.572330673 +0000 UTC m=+0.881704411 container remove fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87 (image=quay.io/ceph/ceph:v18, name=magical_nightingale, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:14:06 compute-0 systemd[1]: libpod-conmon-fce96c15306dafa3ba393f55fa30b6a6eb007e95df9b39d9130037c61f5acb87.scope: Deactivated successfully.
Nov 26 01:14:06 compute-0 ceph-mgr[193049]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:06 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 01:14:06 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:06.793+0000 7f64b5e6a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'osd_support'
Nov 26 01:14:07 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:07.055+0000 7f64b5e6a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 01:14:07 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:07.335+0000 7f64b5e6a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'progress'
Nov 26 01:14:07 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:07.628+0000 7f64b5e6a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 01:14:07 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'prometheus'
Nov 26 01:14:07 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:07.853+0000 7f64b5e6a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 01:14:08 compute-0 podman[193304]: 2025-11-26 01:14:08.698383761 +0000 UTC m=+0.081098316 container create 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:14:08 compute-0 podman[193304]: 2025-11-26 01:14:08.666661254 +0000 UTC m=+0.049375859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:08 compute-0 systemd[1]: Started libpod-conmon-0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa.scope.
Nov 26 01:14:08 compute-0 ceph-mgr[193049]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 01:14:08 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rbd_support'
Nov 26 01:14:08 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:08.783+0000 7f64b5e6a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 01:14:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/544a2f282cab8f8963389e3c32b5f1d7b539df84388234b096ad0128663f5f47/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/544a2f282cab8f8963389e3c32b5f1d7b539df84388234b096ad0128663f5f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/544a2f282cab8f8963389e3c32b5f1d7b539df84388234b096ad0128663f5f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:08 compute-0 podman[193304]: 2025-11-26 01:14:08.860593802 +0000 UTC m=+0.243308367 container init 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:08 compute-0 podman[193304]: 2025-11-26 01:14:08.875477991 +0000 UTC m=+0.258192546 container start 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:08 compute-0 podman[193304]: 2025-11-26 01:14:08.881867567 +0000 UTC m=+0.264582172 container attach 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:14:09 compute-0 ceph-mgr[193049]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 01:14:09 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'restful'
Nov 26 01:14:09 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:09.064+0000 7f64b5e6a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 01:14:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018589691' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]: 
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]: {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "health": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "status": "HEALTH_OK",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "checks": {},
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "mutes": []
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "election_epoch": 5,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "quorum": [
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        0
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    ],
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "quorum_names": [
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "compute-0"
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    ],
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "quorum_age": 15,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "monmap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "epoch": 1,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "min_mon_release_name": "reef",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_mons": 1
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "osdmap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "epoch": 1,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_osds": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_up_osds": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "osd_up_since": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_in_osds": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "osd_in_since": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_remapped_pgs": 0
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "pgmap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "pgs_by_state": [],
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_pgs": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_pools": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_objects": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "data_bytes": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "bytes_used": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "bytes_avail": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "bytes_total": 0
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "fsmap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "epoch": 1,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "by_rank": [],
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "up:standby": 0
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "mgrmap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "available": false,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "num_standbys": 0,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "modules": [
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:            "iostat",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:            "nfs",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:            "restful"
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        ],
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "services": {}
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "servicemap": {
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "epoch": 1,
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:        "services": {}
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    },
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]:    "progress_events": {}
Nov 26 01:14:09 compute-0 relaxed_leavitt[193321]: }
Nov 26 01:14:09 compute-0 systemd[1]: libpod-0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa.scope: Deactivated successfully.
Nov 26 01:14:09 compute-0 podman[193347]: 2025-11-26 01:14:09.420761185 +0000 UTC m=+0.066262500 container died 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-544a2f282cab8f8963389e3c32b5f1d7b539df84388234b096ad0128663f5f47-merged.mount: Deactivated successfully.
Nov 26 01:14:09 compute-0 podman[193347]: 2025-11-26 01:14:09.539784059 +0000 UTC m=+0.185285314 container remove 0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:14:09 compute-0 systemd[1]: libpod-conmon-0c638714580fe6ec81b65cc5ed1f8222e481e934dfc778822d72df5074acdbfa.scope: Deactivated successfully.
Nov 26 01:14:09 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rgw'
Nov 26 01:14:10 compute-0 ceph-mgr[193049]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 01:14:10 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rook'
Nov 26 01:14:10 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:10.413+0000 7f64b5e6a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 01:14:11 compute-0 podman[193362]: 2025-11-26 01:14:11.637394375 +0000 UTC m=+0.049694037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'selftest'
Nov 26 01:14:12 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:12.412+0000 7f64b5e6a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'snap_schedule'
Nov 26 01:14:12 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:12.640+0000 7f64b5e6a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'stats'
Nov 26 01:14:12 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:12.872+0000 7f64b5e6a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 01:14:12 compute-0 podman[193362]: 2025-11-26 01:14:12.943006863 +0000 UTC m=+1.355306515 container create 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:13 compute-0 systemd[1]: Started libpod-conmon-80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb.scope.
Nov 26 01:14:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f19ae029a944b46a5a5328b4f70d74ba517ec8c52e2ebb70f5eb672c74a2e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f19ae029a944b46a5a5328b4f70d74ba517ec8c52e2ebb70f5eb672c74a2e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f19ae029a944b46a5a5328b4f70d74ba517ec8c52e2ebb70f5eb672c74a2e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:13 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'status'
Nov 26 01:14:13 compute-0 podman[193362]: 2025-11-26 01:14:13.110557823 +0000 UTC m=+1.522857486 container init 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:13 compute-0 podman[193362]: 2025-11-26 01:14:13.122753202 +0000 UTC m=+1.535052854 container start 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:13 compute-0 podman[193362]: 2025-11-26 01:14:13.129283752 +0000 UTC m=+1.541583464 container attach 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:13 compute-0 ceph-mgr[193049]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 01:14:13 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'telegraf'
Nov 26 01:14:13 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:13.377+0000 7f64b5e6a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 01:14:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570933398' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]: 
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]: {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "health": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "status": "HEALTH_OK",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "checks": {},
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "mutes": []
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "election_epoch": 5,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "quorum": [
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        0
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    ],
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "quorum_names": [
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "compute-0"
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    ],
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "quorum_age": 19,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "monmap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "epoch": 1,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "min_mon_release_name": "reef",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_mons": 1
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "osdmap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "epoch": 1,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_osds": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_up_osds": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "osd_up_since": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_in_osds": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "osd_in_since": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_remapped_pgs": 0
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "pgmap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "pgs_by_state": [],
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_pgs": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_pools": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_objects": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "data_bytes": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "bytes_used": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "bytes_avail": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "bytes_total": 0
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "fsmap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "epoch": 1,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "by_rank": [],
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "up:standby": 0
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "mgrmap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "available": false,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "num_standbys": 0,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "modules": [
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:            "iostat",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:            "nfs",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:            "restful"
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        ],
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "services": {}
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "servicemap": {
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "epoch": 1,
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:        "services": {}
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    },
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]:    "progress_events": {}
Nov 26 01:14:13 compute-0 pedantic_goodall[193377]: }
Nov 26 01:14:13 compute-0 podman[193382]: 2025-11-26 01:14:13.577915965 +0000 UTC m=+0.123138803 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:14:13 compute-0 podman[193384]: 2025-11-26 01:14:13.578011667 +0000 UTC m=+0.115813182 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:14:13 compute-0 systemd[1]: libpod-80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb.scope: Deactivated successfully.
Nov 26 01:14:13 compute-0 podman[193362]: 2025-11-26 01:14:13.605767041 +0000 UTC m=+2.018066673 container died 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:13 compute-0 ceph-mgr[193049]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 01:14:13 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'telemetry'
Nov 26 01:14:13 compute-0 podman[193386]: 2025-11-26 01:14:13.620608958 +0000 UTC m=+0.168023164 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
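[Note] Interleaved with the Ceph bootstrap, podman's healthcheck timers fire for the long-running EDPM containers (ceilometer_agent_compute, podman_exporter, ovn_controller), each logging health_status=healthy. The same events can be consumed programmatically; a sketch using "podman events" with JSON output, where the exact field names (Name, HealthStatus, Status) follow podman's event format and should be treated as illustrative:

#!/usr/bin/env python3
# Stream podman events and print health transitions like the ones logged
# above. One JSON object per line; .get() keeps the sketch tolerant of
# field-name differences across podman versions.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--filter", "event=health_status"],
    stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Name"), ev.get("HealthStatus") or ev.get("Status"))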
Nov 26 01:14:13 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:13.608+0000 7f64b5e6a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 01:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f19ae029a944b46a5a5328b4f70d74ba517ec8c52e2ebb70f5eb672c74a2e1-merged.mount: Deactivated successfully.
Nov 26 01:14:13 compute-0 podman[193362]: 2025-11-26 01:14:13.666048084 +0000 UTC m=+2.078347716 container remove 80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb (image=quay.io/ceph/ceph:v18, name=pedantic_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:14:13 compute-0 systemd[1]: libpod-conmon-80ec8f930dcb3a925e20420881a937cece3bf5c6925ec48dcba55fb9d76b74cb.scope: Deactivated successfully.
Nov 26 01:14:14 compute-0 ceph-mgr[193049]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 01:14:14 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 01:14:14 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:14.192+0000 7f64b5e6a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 01:14:14 compute-0 ceph-mgr[193049]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:14 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:14.852+0000 7f64b5e6a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:14 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'volumes'
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 01:14:15 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:15.517+0000 7f64b5e6a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'zabbix'
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: ms_deliver_dispatch: unhandled message 0x55f7e76e51e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 01:14:15 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:15.739+0000 7f64b5e6a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vbisdw
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr handle_mgr_map Activating!
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vbisdw(active, starting, since 0.0157725s)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr handle_mgr_map I am now activating
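[Note] With every module loaded, the mon promotes the sole mgr: "Activating manager daemon compute-0.vbisdw" on the mon side, "handle_mgr_map Activating!" on the mgr side, and mgrmap e2 records it as active. The resulting map can be read back directly; a short sketch, with key names matching "ceph mgr dump" output:

#!/usr/bin/env python3
# Read the mgr map the activation above just published (epoch 2 here).
import json
import subprocess

dump = json.loads(subprocess.run(["ceph", "mgr", "dump"],
                                 capture_output=True, text=True,
                                 check=True).stdout)
print(dump["active_name"], dump["available"])  # compute-0.vbisdw, true once started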
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vbisdw", "id": "compute-0.vbisdw"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vbisdw", "id": "compute-0.vbisdw"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: balancer
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: crash
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer INFO root] Starting
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:14:15
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [balancer INFO root] No pools available
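[Note] The balancer starts in its defaults: upmap mode with a 5% misplaced-PG ceiling (the mgr option target_max_misplaced_ratio), and "No pools available" is expected on a cluster that has no pools yet. A sketch of inspecting and tuning that state with standard Ceph commands:

#!/usr/bin/env python3
# "ceph balancer status" returns JSON; parse it and show the two values
# logged above. The config set line is left commented out: changing
# target_max_misplaced_ratio would move the 0.050000 figure.
import json
import subprocess

def ceph(*args: str) -> str:
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

status = json.loads(ceph("balancer", "status"))
print(status["mode"], status["active"])  # expect: upmap True
# ceph("config", "set", "mgr", "target_max_misplaced_ratio", "0.03")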
Nov 26 01:14:15 compute-0 podman[193478]: 2025-11-26 01:14:15.807544835 +0000 UTC m=+0.096445917 container create 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Manager daemon compute-0.vbisdw is now available
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: devicehealth
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Starting
Nov 26 01:14:15 compute-0 ceph-mon[192746]: Activating manager daemon compute-0.vbisdw
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: iostat
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: nfs
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: orchestrator
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: pg_autoscaler
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: progress
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [progress INFO root] Loading...
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [progress INFO root] No stored events to load
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [progress INFO root] Loaded [] historic events
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:14:15 compute-0 podman[193478]: 2025-11-26 01:14:15.767642944 +0000 UTC m=+0.056544106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:15 compute-0 systemd[1]: Started libpod-conmon-2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2.scope.
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] recovery thread starting
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] starting setup
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: rbd_support
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: restful
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: status
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [restful WARNING root] server not running: no certificate configured
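[Note] The restful module binds [::]:8003 but will not serve until a TLS certificate exists, hence the warning above. The module ships a helper for generating one; a sketch, where the 8003 endpoint in the comment assumes this host's settings:

#!/usr/bin/env python3
# Clear "server not running: no certificate configured": the restful
# module is HTTPS-only, so give it a self-signed cert via its helper.
import subprocess

subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
# Verify the endpoint came up afterwards, e.g.:
#   ceph mgr services   ->   {"restful": "https://compute-0:8003/"}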
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: telemetry
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] PerfHandler: starting
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 26 01:14:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TaskHandler: starting
Nov 26 01:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e7c91400f2cc89715d08bb7b5608333167ad435b9ee3a5ae3895d668a68a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e7c91400f2cc89715d08bb7b5608333167ad435b9ee3a5ae3895d668a68a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e7c91400f2cc89715d08bb7b5608333167ad435b9ee3a5ae3895d668a68a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"} v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"}]: dispatch
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: [rbd_support INFO root] setup complete
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 26 01:14:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:15 compute-0 podman[193478]: 2025-11-26 01:14:15.962010204 +0000 UTC m=+0.250911376 container init 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:14:15 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: volumes
Nov 26 01:14:15 compute-0 podman[193478]: 2025-11-26 01:14:15.977091007 +0000 UTC m=+0.265992119 container start 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:14:15 compute-0 podman[193478]: 2025-11-26 01:14:15.984448069 +0000 UTC m=+0.273349251 container attach 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:14:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915249336' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:16 compute-0 adoring_kirch[193532]: 
Nov 26 01:14:16 compute-0 adoring_kirch[193532]: {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "health": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "status": "HEALTH_OK",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "checks": {},
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "mutes": []
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "election_epoch": 5,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "quorum": [
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        0
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    ],
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "quorum_names": [
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "compute-0"
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    ],
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "quorum_age": 22,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "monmap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "epoch": 1,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "min_mon_release_name": "reef",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_mons": 1
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "osdmap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "epoch": 1,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_osds": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_up_osds": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "osd_up_since": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_in_osds": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "osd_in_since": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_remapped_pgs": 0
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "pgmap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "pgs_by_state": [],
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_pgs": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_pools": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_objects": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "data_bytes": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "bytes_used": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "bytes_avail": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "bytes_total": 0
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "fsmap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "epoch": 1,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "by_rank": [],
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "up:standby": 0
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "mgrmap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "available": false,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "num_standbys": 0,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "modules": [
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:            "iostat",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:            "nfs",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:            "restful"
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        ],
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "services": {}
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "servicemap": {
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "epoch": 1,
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:        "services": {}
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    },
Nov 26 01:14:16 compute-0 adoring_kirch[193532]:    "progress_events": {}
Nov 26 01:14:16 compute-0 adoring_kirch[193532]: }
Nov 26 01:14:16 compute-0 systemd[1]: libpod-2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2.scope: Deactivated successfully.
Nov 26 01:14:16 compute-0 podman[193478]: 2025-11-26 01:14:16.429636892 +0000 UTC m=+0.718538004 container died 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 01:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-135e7c91400f2cc89715d08bb7b5608333167ad435b9ee3a5ae3895d668a68a7-merged.mount: Deactivated successfully.
Nov 26 01:14:16 compute-0 podman[193478]: 2025-11-26 01:14:16.508688314 +0000 UTC m=+0.797589416 container remove 2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2 (image=quay.io/ceph/ceph:v18, name=adoring_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:16 compute-0 systemd[1]: libpod-conmon-2f8337d84046ae9b6ff088887fc4f54f5de6a99938e18ac034184ffe5e7a1cb2.scope: Deactivated successfully.
Nov 26 01:14:16 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vbisdw(active, since 1.03396s)
Nov 26 01:14:16 compute-0 ceph-mon[192746]: Manager daemon compute-0.vbisdw is now available
Nov 26 01:14:16 compute-0 ceph-mon[192746]: from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"}]: dispatch
Nov 26 01:14:16 compute-0 ceph-mon[192746]: from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:16 compute-0 ceph-mon[192746]: from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"}]: dispatch
Nov 26 01:14:16 compute-0 ceph-mon[192746]: from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:16 compute-0 ceph-mon[192746]: from='mgr.14102 192.168.122.100:0/313306120' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:17 compute-0 podman[193609]: 2025-11-26 01:14:17.589456827 +0000 UTC m=+0.135378973 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:14:17 compute-0 podman[193608]: 2025-11-26 01:14:17.596554082 +0000 UTC m=+0.141499932 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm)
Nov 26 01:14:17 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:17 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vbisdw(active, since 2s)
Nov 26 01:14:18 compute-0 podman[193653]: 2025-11-26 01:14:18.654365054 +0000 UTC m=+0.095571575 container create fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:18 compute-0 podman[193653]: 2025-11-26 01:14:18.62082831 +0000 UTC m=+0.062034831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:18 compute-0 systemd[1]: Started libpod-conmon-fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46.scope.
Nov 26 01:14:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271050cab676125c92c83775c5f4bdc84fb6bdcd975354c4d9937822f1c0f2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271050cab676125c92c83775c5f4bdc84fb6bdcd975354c4d9937822f1c0f2f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9271050cab676125c92c83775c5f4bdc84fb6bdcd975354c4d9937822f1c0f2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:18 compute-0 podman[193653]: 2025-11-26 01:14:18.821417682 +0000 UTC m=+0.262624213 container init fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 01:14:18 compute-0 podman[193653]: 2025-11-26 01:14:18.844177086 +0000 UTC m=+0.285383597 container start fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:14:18 compute-0 podman[193653]: 2025-11-26 01:14:18.852268687 +0000 UTC m=+0.293475258 container attach fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 26 01:14:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983268464' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 26 01:14:19 compute-0 nice_poitras[193668]: 
Nov 26 01:14:19 compute-0 nice_poitras[193668]: {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "health": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "status": "HEALTH_OK",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "checks": {},
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "mutes": []
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "election_epoch": 5,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "quorum": [
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        0
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    ],
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "quorum_names": [
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "compute-0"
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    ],
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "quorum_age": 25,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "monmap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "epoch": 1,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "min_mon_release_name": "reef",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_mons": 1
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "osdmap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "epoch": 1,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_osds": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_up_osds": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "osd_up_since": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_in_osds": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "osd_in_since": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_remapped_pgs": 0
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "pgmap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "pgs_by_state": [],
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_pgs": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_pools": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_objects": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "data_bytes": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "bytes_used": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "bytes_avail": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "bytes_total": 0
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "fsmap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "epoch": 1,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "by_rank": [],
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "up:standby": 0
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "mgrmap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "available": true,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "num_standbys": 0,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "modules": [
Nov 26 01:14:19 compute-0 nice_poitras[193668]:            "iostat",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:            "nfs",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:            "restful"
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        ],
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "services": {}
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "servicemap": {
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "epoch": 1,
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "modified": "2025-11-26T01:13:49.405054+0000",
Nov 26 01:14:19 compute-0 nice_poitras[193668]:        "services": {}
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    },
Nov 26 01:14:19 compute-0 nice_poitras[193668]:    "progress_events": {}
Nov 26 01:14:19 compute-0 nice_poitras[193668]: }
Nov 26 01:14:19 compute-0 systemd[1]: libpod-fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46.scope: Deactivated successfully.
Nov 26 01:14:19 compute-0 podman[193695]: 2025-11-26 01:14:19.65167117 +0000 UTC m=+0.062868091 container died fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9271050cab676125c92c83775c5f4bdc84fb6bdcd975354c4d9937822f1c0f2f-merged.mount: Deactivated successfully.
Nov 26 01:14:19 compute-0 podman[193695]: 2025-11-26 01:14:19.735450435 +0000 UTC m=+0.146647356 container remove fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46 (image=quay.io/ceph/ceph:v18, name=nice_poitras, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 26 01:14:19 compute-0 systemd[1]: libpod-conmon-fb08e7af9707035bbcf5a505d1900e9932e88204342d2564ced828d358135f46.scope: Deactivated successfully.
Nov 26 01:14:19 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:19 compute-0 podman[193708]: 2025-11-26 01:14:19.891525176 +0000 UTC m=+0.094545457 container create b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:19 compute-0 podman[193708]: 2025-11-26 01:14:19.861092362 +0000 UTC m=+0.064112663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:19 compute-0 systemd[1]: Started libpod-conmon-b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd.scope.
Nov 26 01:14:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb31b9314ad76b3c2da746530c1d738dd23f12991e37b70282422d83ae67db9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb31b9314ad76b3c2da746530c1d738dd23f12991e37b70282422d83ae67db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb31b9314ad76b3c2da746530c1d738dd23f12991e37b70282422d83ae67db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb31b9314ad76b3c2da746530c1d738dd23f12991e37b70282422d83ae67db9/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 podman[193708]: 2025-11-26 01:14:20.063204475 +0000 UTC m=+0.266224816 container init b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:14:20 compute-0 podman[193708]: 2025-11-26 01:14:20.075740282 +0000 UTC m=+0.278760563 container start b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:20 compute-0 podman[193708]: 2025-11-26 01:14:20.082439167 +0000 UTC m=+0.285459458 container attach b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:14:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 01:14:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3521965053' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:14:20 compute-0 systemd[1]: libpod-b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd.scope: Deactivated successfully.
Nov 26 01:14:20 compute-0 podman[193708]: 2025-11-26 01:14:20.658531884 +0000 UTC m=+0.861552145 container died b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbb31b9314ad76b3c2da746530c1d738dd23f12991e37b70282422d83ae67db9-merged.mount: Deactivated successfully.
Nov 26 01:14:20 compute-0 podman[193708]: 2025-11-26 01:14:20.715531511 +0000 UTC m=+0.918551762 container remove b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:20 compute-0 systemd[1]: libpod-conmon-b2736f89234a6426ccb554f73b48418411313b177288d22a457965c002985bdd.scope: Deactivated successfully.
Nov 26 01:14:20 compute-0 podman[193761]: 2025-11-26 01:14:20.843020947 +0000 UTC m=+0.086113798 container create 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:20 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3521965053' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:14:20 compute-0 podman[193761]: 2025-11-26 01:14:20.813462436 +0000 UTC m=+0.056555297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:20 compute-0 systemd[1]: Started libpod-conmon-3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5.scope.
Nov 26 01:14:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb045e94530c206380f1e650785f65a2f06f5fab248d42f0d2f7ee329302954d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb045e94530c206380f1e650785f65a2f06f5fab248d42f0d2f7ee329302954d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb045e94530c206380f1e650785f65a2f06f5fab248d42f0d2f7ee329302954d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:20 compute-0 podman[193761]: 2025-11-26 01:14:20.982810503 +0000 UTC m=+0.225903324 container init 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:21 compute-0 podman[193761]: 2025-11-26 01:14:21.001323856 +0000 UTC m=+0.244416667 container start 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:14:21 compute-0 podman[193761]: 2025-11-26 01:14:21.006681126 +0000 UTC m=+0.249773947 container attach 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 26 01:14:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/587652147' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:21 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/587652147' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 26 01:14:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/587652147' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  1: '-n'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  2: 'mgr.compute-0.vbisdw'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  3: '-f'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  4: '--setuser'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  5: 'ceph'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  6: '--setgroup'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  7: 'ceph'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  8: '--default-log-to-file=false'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  9: '--default-log-to-journald=true'
Nov 26 01:14:21 compute-0 ceph-mgr[193049]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 26 01:14:21 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vbisdw(active, since 6s)
Nov 26 01:14:21 compute-0 systemd[1]: libpod-3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5.scope: Deactivated successfully.
Nov 26 01:14:21 compute-0 podman[193761]: 2025-11-26 01:14:21.968679019 +0000 UTC m=+1.211771860 container died 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb045e94530c206380f1e650785f65a2f06f5fab248d42f0d2f7ee329302954d-merged.mount: Deactivated successfully.
Nov 26 01:14:22 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: ignoring --setuser ceph since I am not root
Nov 26 01:14:22 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: ignoring --setgroup ceph since I am not root
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: pidfile_write: ignore empty --pid-file
Nov 26 01:14:22 compute-0 podman[193761]: 2025-11-26 01:14:22.088823363 +0000 UTC m=+1.331916174 container remove 3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5 (image=quay.io/ceph/ceph:v18, name=optimistic_bhabha, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:14:22 compute-0 systemd[1]: libpod-conmon-3e0b15d512a69994947216e08f4c48432be8b3f29800c77bf077a8527e8caba5.scope: Deactivated successfully.
Nov 26 01:14:22 compute-0 podman[193835]: 2025-11-26 01:14:22.193062522 +0000 UTC m=+0.066178507 container create 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'alerts'
Nov 26 01:14:22 compute-0 systemd[1]: Started libpod-conmon-019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e.scope.
Nov 26 01:14:22 compute-0 podman[193835]: 2025-11-26 01:14:22.169283282 +0000 UTC m=+0.042399287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a85c36b06fd37c7bf7c6aefc6fec279525d49d7ef197091a003866ed3619cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a85c36b06fd37c7bf7c6aefc6fec279525d49d7ef197091a003866ed3619cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a85c36b06fd37c7bf7c6aefc6fec279525d49d7ef197091a003866ed3619cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:22 compute-0 podman[193835]: 2025-11-26 01:14:22.316470171 +0000 UTC m=+0.189586166 container init 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:14:22 compute-0 podman[193835]: 2025-11-26 01:14:22.334149042 +0000 UTC m=+0.207265047 container start 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:14:22 compute-0 podman[193835]: 2025-11-26 01:14:22.342166081 +0000 UTC m=+0.215282146 container attach 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'balancer'
Nov 26 01:14:22 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:22.490+0000 7f7678420140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 01:14:22 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'cephadm'
Nov 26 01:14:22 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:22.726+0000 7f7678420140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 01:14:22 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/587652147' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 26 01:14:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 01:14:22 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232892344' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 01:14:22 compute-0 competent_rhodes[193852]: {
Nov 26 01:14:22 compute-0 competent_rhodes[193852]:    "epoch": 5,
Nov 26 01:14:22 compute-0 competent_rhodes[193852]:    "available": true,
Nov 26 01:14:22 compute-0 competent_rhodes[193852]:    "active_name": "compute-0.vbisdw",
Nov 26 01:14:22 compute-0 competent_rhodes[193852]:    "num_standby": 0
Nov 26 01:14:22 compute-0 competent_rhodes[193852]: }
Nov 26 01:14:23 compute-0 systemd[1]: libpod-019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e.scope: Deactivated successfully.
Nov 26 01:14:23 compute-0 conmon[193852]: conmon 019445977fd1f3a4d287 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e.scope/container/memory.events
Nov 26 01:14:23 compute-0 podman[193835]: 2025-11-26 01:14:23.009988132 +0000 UTC m=+0.883104147 container died 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 26 01:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a85c36b06fd37c7bf7c6aefc6fec279525d49d7ef197091a003866ed3619cf-merged.mount: Deactivated successfully.
Nov 26 01:14:23 compute-0 podman[193835]: 2025-11-26 01:14:23.092946065 +0000 UTC m=+0.966062080 container remove 019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e (image=quay.io/ceph/ceph:v18, name=competent_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:14:23 compute-0 systemd[1]: libpod-conmon-019445977fd1f3a4d287f5b6e6ca56c94e677b8a70de715d6ff5253163a7f81e.scope: Deactivated successfully.
Nov 26 01:14:23 compute-0 podman[193888]: 2025-11-26 01:14:23.236563702 +0000 UTC m=+0.092184256 container create 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:14:23 compute-0 podman[193888]: 2025-11-26 01:14:23.189962476 +0000 UTC m=+0.045583070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:23 compute-0 systemd[1]: Started libpod-conmon-682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9.scope.
Nov 26 01:14:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a7450a780cca61d1eabe0f2afb14bd15d4f09025449a024d4bae83ae4a1fbb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a7450a780cca61d1eabe0f2afb14bd15d4f09025449a024d4bae83ae4a1fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a7450a780cca61d1eabe0f2afb14bd15d4f09025449a024d4bae83ae4a1fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:23 compute-0 podman[193888]: 2025-11-26 01:14:23.404971045 +0000 UTC m=+0.260591609 container init 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:14:23 compute-0 podman[193888]: 2025-11-26 01:14:23.43163597 +0000 UTC m=+0.287256524 container start 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:23 compute-0 podman[193888]: 2025-11-26 01:14:23.439696871 +0000 UTC m=+0.295317445 container attach 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:14:23 compute-0 podman[193901]: 2025-11-26 01:14:23.467107896 +0000 UTC m=+0.159449071 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 01:14:24 compute-0 podman[193957]: 2025-11-26 01:14:24.580364285 +0000 UTC m=+0.125390912 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Nov 26 01:14:24 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'crash'
Nov 26 01:14:25 compute-0 ceph-mgr[193049]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 01:14:25 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:25.129+0000 7f7678420140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 01:14:25 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'dashboard'
Nov 26 01:14:26 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'devicehealth'
Nov 26 01:14:26 compute-0 ceph-mgr[193049]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 01:14:26 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:26.688+0000 7f7678420140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 26 01:14:26 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'diskprediction_local'
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]:  from numpy import show_config as show_numpy_config
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:27.187+0000 7f7678420140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'influx'
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:27.413+0000 7f7678420140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'insights'
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'iostat'
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:27.868+0000 7f7678420140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 26 01:14:27 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'k8sevents'
Nov 26 01:14:29 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'localpool'
Nov 26 01:14:29 compute-0 podman[158021]: time="2025-11-26T01:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:14:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23483 "" "Go-http-client/1.1"
Nov 26 01:14:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4359 "" "Go-http-client/1.1"
Nov 26 01:14:29 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'mds_autoscaler'
Nov 26 01:14:30 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'mirroring'
Nov 26 01:14:30 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'nfs'
Nov 26 01:14:31 compute-0 ceph-mgr[193049]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 01:14:31 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:31.291+0000 7f7678420140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 26 01:14:31 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'orchestrator'
Nov 26 01:14:31 compute-0 openstack_network_exporter[160178]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:14:31 compute-0 openstack_network_exporter[160178]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:14:31 compute-0 openstack_network_exporter[160178]: ERROR   01:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:14:31 compute-0 openstack_network_exporter[160178]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:14:31 compute-0 openstack_network_exporter[160178]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:14:31 compute-0 ceph-mgr[193049]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:31 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:31.971+0000 7f7678420140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:31 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'osd_perf_query'
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:32.227+0000 7f7678420140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'osd_support'
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:32.478+0000 7f7678420140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'pg_autoscaler'
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:32.731+0000 7f7678420140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'progress'
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:32.955+0000 7f7678420140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 26 01:14:32 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'prometheus'
Nov 26 01:14:33 compute-0 ceph-mgr[193049]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 01:14:33 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:33.886+0000 7f7678420140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 26 01:14:33 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rbd_support'
Nov 26 01:14:34 compute-0 ceph-mgr[193049]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 01:14:34 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:34.165+0000 7f7678420140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 26 01:14:34 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'restful'
Nov 26 01:14:34 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rgw'
Nov 26 01:14:35 compute-0 ceph-mgr[193049]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 01:14:35 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:35.556+0000 7f7678420140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 26 01:14:35 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'rook'
Nov 26 01:14:37 compute-0 ceph-mgr[193049]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 01:14:37 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:37.595+0000 7f7678420140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 26 01:14:37 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'selftest'
Nov 26 01:14:37 compute-0 ceph-mgr[193049]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 01:14:37 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:37.825+0000 7f7678420140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 26 01:14:37 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'snap_schedule'
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:38.076+0000 7f7678420140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'stats'
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'status'
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:38.550+0000 7f7678420140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'telegraf'
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:38.774+0000 7f7678420140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 26 01:14:38 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'telemetry'
Nov 26 01:14:39 compute-0 ceph-mgr[193049]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 01:14:39 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:39.382+0000 7f7678420140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 26 01:14:39 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'test_orchestrator'
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:40.003+0000 7f7678420140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'volumes'
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:40.677+0000 7f7678420140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr[py] Loading python module 'zabbix'
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T01:14:40.908+0000 7f7678420140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vbisdw restarted
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: ms_deliver_dispatch: unhandled message 0x5624af7431e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vbisdw
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vbisdw(active, starting, since 0.0167327s)
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr handle_mgr_map Activating!
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr handle_mgr_map I am now activating
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vbisdw", "id": "compute-0.vbisdw"} v 0) v1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vbisdw", "id": "compute-0.vbisdw"}]: dispatch
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e1 all = 1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 01:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: balancer
Nov 26 01:14:40 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Manager daemon compute-0.vbisdw is now available
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Starting
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:14:40
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [balancer INFO root] No pools available
Nov 26 01:14:40 compute-0 ceph-mgr[193049]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:40 compute-0 ceph-mon[192746]: Active manager daemon compute-0.vbisdw restarted
Nov 26 01:14:40 compute-0 ceph-mon[192746]: Activating manager daemon compute-0.vbisdw
Nov 26 01:14:40 compute-0 ceph-mon[192746]: Manager daemon compute-0.vbisdw is now available
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: cephadm
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: crash
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: devicehealth
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Starting
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: iostat
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: nfs
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: orchestrator
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: pg_autoscaler
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: progress
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [progress INFO root] Loading...
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [progress INFO root] No stored events to load
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [progress INFO root] Loaded [] historic events
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [progress INFO root] Loaded OSDMap, ready.
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] recovery thread starting
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] starting setup
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: rbd_support
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: restful
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [restful INFO root] server_addr: :: server_port: 8003
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [restful WARNING root] server not running: no certificate configured
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: status
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: telemetry
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"} v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"}]: dispatch
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] PerfHandler: starting
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TaskHandler: starting
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"} v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"}]: dispatch
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] setup complete
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: mgr load Constructed class from module: volumes
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:41 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vbisdw(active, since 1.03116s)
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 26 01:14:41 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 26 01:14:41 compute-0 nifty_wescoff[193909]: {
Nov 26 01:14:41 compute-0 nifty_wescoff[193909]:    "mgrmap_epoch": 7,
Nov 26 01:14:41 compute-0 nifty_wescoff[193909]:    "initialized": true
Nov 26 01:14:41 compute-0 nifty_wescoff[193909]: }
Nov 26 01:14:42 compute-0 systemd[1]: libpod-682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9.scope: Deactivated successfully.
Nov 26 01:14:42 compute-0 podman[193888]: 2025-11-26 01:14:42.00559294 +0000 UTC m=+18.861213474 container died 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:42 compute-0 ceph-mon[192746]: Found migration_current of "None". Setting to last migration.
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/mirror_snapshot_schedule"}]: dispatch
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vbisdw/trash_purge_schedule"}]: dispatch
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-11a7450a780cca61d1eabe0f2afb14bd15d4f09025449a024d4bae83ae4a1fbb-merged.mount: Deactivated successfully.
Nov 26 01:14:42 compute-0 podman[193888]: 2025-11-26 01:14:42.088465194 +0000 UTC m=+18.944085748 container remove 682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9 (image=quay.io/ceph/ceph:v18, name=nifty_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:42 compute-0 systemd[1]: libpod-conmon-682c9388ee62458aa3999e187d575480e971a36d1a59156c395a554918498ea9.scope: Deactivated successfully.
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.206513891 +0000 UTC m=+0.074001828 container create ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:14:42 compute-0 systemd[1]: Started libpod-conmon-ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb.scope.
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.183978671 +0000 UTC m=+0.051466648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3921201275eab5ec1be819f0490d987c143a3999b1516882e4ae53825e2b7652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3921201275eab5ec1be819f0490d987c143a3999b1516882e4ae53825e2b7652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3921201275eab5ec1be819f0490d987c143a3999b1516882e4ae53825e2b7652/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.343939248 +0000 UTC m=+0.211427215 container init ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.368103843 +0000 UTC m=+0.235591810 container start ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.374939104 +0000 UTC m=+0.242427061 container attach ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: [cephadm INFO cherrypy.error] [26/Nov/2025:01:14:42] ENGINE Bus STARTING
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : [26/Nov/2025:01:14:42] ENGINE Bus STARTING
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: [cephadm INFO cherrypy.error] [26/Nov/2025:01:14:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : [26/Nov/2025:01:14:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: [cephadm INFO cherrypy.error] [26/Nov/2025:01:14:42] ENGINE Client ('192.168.122.100', 41890) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : [26/Nov/2025:01:14:42] ENGINE Client ('192.168.122.100', 41890) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: [cephadm INFO cherrypy.error] [26/Nov/2025:01:14:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : [26/Nov/2025:01:14:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: [cephadm INFO cherrypy.error] [26/Nov/2025:01:14:42] ENGINE Bus STARTED
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : [26/Nov/2025:01:14:42] ENGINE Bus STARTED
Nov 26 01:14:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:14:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:42 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 26 01:14:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:14:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:14:42 compute-0 podman[194099]: 2025-11-26 01:14:42.987350713 +0000 UTC m=+0.854838650 container died ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:14:42 compute-0 systemd[1]: libpod-ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb.scope: Deactivated successfully.
Nov 26 01:14:43 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vbisdw(active, since 2s)
Nov 26 01:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3921201275eab5ec1be819f0490d987c143a3999b1516882e4ae53825e2b7652-merged.mount: Deactivated successfully.
Nov 26 01:14:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:43 compute-0 podman[194099]: 2025-11-26 01:14:43.067568163 +0000 UTC m=+0.935056130 container remove ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb (image=quay.io/ceph/ceph:v18, name=happy_carver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:14:43 compute-0 systemd[1]: libpod-conmon-ded738bb53c5b3f760e6da9a5673c668882cfe97a6b11eff82ccbf5e65031ddb.scope: Deactivated successfully.
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.194466387 +0000 UTC m=+0.085404666 container create f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.16127121 +0000 UTC m=+0.052209549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:43 compute-0 systemd[1]: Started libpod-conmon-f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1.scope.
Nov 26 01:14:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6a4f3d6e4759e5cfe073a1182abfb477756de1f26348aecb07ce503659cdf0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6a4f3d6e4759e5cfe073a1182abfb477756de1f26348aecb07ce503659cdf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6a4f3d6e4759e5cfe073a1182abfb477756de1f26348aecb07ce503659cdf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.369596417 +0000 UTC m=+0.260534746 container init f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.386017916 +0000 UTC m=+0.276956185 container start f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.392735593 +0000 UTC m=+0.283673922 container attach f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923902 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 26 01:14:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: [cephadm INFO root] Set ssh ssh_user
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 26 01:14:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 26 01:14:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: [cephadm INFO root] Set ssh ssh_config
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 26 01:14:43 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 26 01:14:43 compute-0 mystifying_almeida[194192]: ssh user set to ceph-admin. sudo will be used
Nov 26 01:14:43 compute-0 systemd[1]: libpod-f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1.scope: Deactivated successfully.
Nov 26 01:14:43 compute-0 podman[194176]: 2025-11-26 01:14:43.998692883 +0000 UTC m=+0.889631152 container died f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:14:44 compute-0 ceph-mon[192746]: [26/Nov/2025:01:14:42] ENGINE Bus STARTING
Nov 26 01:14:44 compute-0 ceph-mon[192746]: [26/Nov/2025:01:14:42] ENGINE Serving on https://192.168.122.100:7150
Nov 26 01:14:44 compute-0 ceph-mon[192746]: [26/Nov/2025:01:14:42] ENGINE Client ('192.168.122.100', 41890) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 26 01:14:44 compute-0 ceph-mon[192746]: [26/Nov/2025:01:14:42] ENGINE Serving on http://192.168.122.100:8765
Nov 26 01:14:44 compute-0 ceph-mon[192746]: [26/Nov/2025:01:14:42] ENGINE Bus STARTED
Nov 26 01:14:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6a4f3d6e4759e5cfe073a1182abfb477756de1f26348aecb07ce503659cdf0-merged.mount: Deactivated successfully.
Nov 26 01:14:44 compute-0 podman[194176]: 2025-11-26 01:14:44.08058558 +0000 UTC m=+0.971523829 container remove f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1 (image=quay.io/ceph/ceph:v18, name=mystifying_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:14:44 compute-0 systemd[1]: libpod-conmon-f3f691197925ff760bf29e3547915eaaafbd0eab79135505e10c605d7f776fc1.scope: Deactivated successfully.
Nov 26 01:14:44 compute-0 podman[194221]: 2025-11-26 01:14:44.130488634 +0000 UTC m=+0.079828520 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:14:44 compute-0 podman[194218]: 2025-11-26 01:14:44.138489937 +0000 UTC m=+0.097843183 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 01:14:44 compute-0 podman[194256]: 2025-11-26 01:14:44.178378151 +0000 UTC m=+0.065959163 container create 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:44 compute-0 podman[194227]: 2025-11-26 01:14:44.199797469 +0000 UTC m=+0.141012719 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 01:14:44 compute-0 systemd[1]: Started libpod-conmon-68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662.scope.
Nov 26 01:14:44 compute-0 podman[194256]: 2025-11-26 01:14:44.15648746 +0000 UTC m=+0.044068472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:44 compute-0 podman[194256]: 2025-11-26 01:14:44.337923696 +0000 UTC m=+0.225504798 container init 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 26 01:14:44 compute-0 podman[194256]: 2025-11-26 01:14:44.355197528 +0000 UTC m=+0.242778560 container start 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:44 compute-0 podman[194256]: 2025-11-26 01:14:44.361453283 +0000 UTC m=+0.249034385 container attach 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:44 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:44 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 26 01:14:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: [cephadm INFO root] Set ssh private key
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 26 01:14:45 compute-0 ceph-mon[192746]: Set ssh ssh_user
Nov 26 01:14:45 compute-0 ceph-mon[192746]: Set ssh ssh_config
Nov 26 01:14:45 compute-0 ceph-mon[192746]: ssh user set to ceph-admin. sudo will be used
Nov 26 01:14:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:45 compute-0 systemd[1]: libpod-68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662.scope: Deactivated successfully.
Nov 26 01:14:45 compute-0 podman[194256]: 2025-11-26 01:14:45.061454029 +0000 UTC m=+0.949035041 container died 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5e6549833e9f2d9a2d758538d89ad2e17570f604f02333c1808e5f1b049c0c9-merged.mount: Deactivated successfully.
Nov 26 01:14:45 compute-0 podman[194256]: 2025-11-26 01:14:45.124889721 +0000 UTC m=+1.012470753 container remove 68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662 (image=quay.io/ceph/ceph:v18, name=awesome_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:45 compute-0 systemd[1]: libpod-conmon-68efffe25077afde242326ba3fdd86313c98ba3e695b41009f4aee88a7e37662.scope: Deactivated successfully.
Nov 26 01:14:45 compute-0 podman[194345]: 2025-11-26 01:14:45.232121665 +0000 UTC m=+0.070390146 container create 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:45 compute-0 systemd[1]: Started libpod-conmon-882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3.scope.
Nov 26 01:14:45 compute-0 podman[194345]: 2025-11-26 01:14:45.207607721 +0000 UTC m=+0.045876192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:45 compute-0 podman[194345]: 2025-11-26 01:14:45.375580421 +0000 UTC m=+0.213848912 container init 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:14:45 compute-0 podman[194345]: 2025-11-26 01:14:45.389459758 +0000 UTC m=+0.227728239 container start 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:45 compute-0 podman[194345]: 2025-11-26 01:14:45.395284801 +0000 UTC m=+0.233553282 container attach 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 26 01:14:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 26 01:14:45 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 26 01:14:46 compute-0 systemd[1]: libpod-882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3.scope: Deactivated successfully.
Nov 26 01:14:46 compute-0 podman[194345]: 2025-11-26 01:14:46.035651312 +0000 UTC m=+0.873919813 container died 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:46 compute-0 ceph-mon[192746]: Set ssh ssh_identity_key
Nov 26 01:14:46 compute-0 ceph-mon[192746]: Set ssh private key
Nov 26 01:14:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-00979de2ca3ca0a45cdf374ea779860c0b4c65ffe984ea5a86aabd4c7a9ec9db-merged.mount: Deactivated successfully.
Nov 26 01:14:46 compute-0 podman[194345]: 2025-11-26 01:14:46.12296141 +0000 UTC m=+0.961229901 container remove 882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:14:46 compute-0 systemd[1]: libpod-conmon-882bf484fb77eea7748a51b9f4f79079c674683b151109f6b2b273605646a6a3.scope: Deactivated successfully.
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.230429561 +0000 UTC m=+0.078575695 container create cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.195704352 +0000 UTC m=+0.043850536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:46 compute-0 systemd[1]: Started libpod-conmon-cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357.scope.
Nov 26 01:14:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f30df64f4291aa2197d8068c616644a1a65428d8a79c2a31c6ae2edf8b19fda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f30df64f4291aa2197d8068c616644a1a65428d8a79c2a31c6ae2edf8b19fda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f30df64f4291aa2197d8068c616644a1a65428d8a79c2a31c6ae2edf8b19fda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.400239983 +0000 UTC m=+0.248386167 container init cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.415220251 +0000 UTC m=+0.263366375 container start cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.422095943 +0000 UTC m=+0.270242087 container attach cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:46 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:46 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:46 compute-0 hungry_gould[194414]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq7u4x/ccwEyrsnMSeiPp3Z7J3qxEFbJHM8LhksY1oRKAu7jNoTkzw3YKdlN5UNaqvRuFIbsb612wM/+gII5cf4fnPx+sepmJK1AsyuNf8lTQPm/uyYGkmayLJvWGumxT94lPXCxl27k6jG/8xNUY6TM6ONpZprUF0pRqkdvHq0lhtes7qKw6eQDkJdsWqt8zd1N0GK6nP6aYTijWUtGLfQ/SXGYhVHFClPjF3MFCU1ZRIfwhYS5JoKbxSbDn7/4BQO12nUxihGjcyOnhpli6wSVCv76To3PqcDYPzXmQPHWNtKQomlX8A9SKWl/6SSqedQ0n4ajg2iEA1XD0e7e8YlduuMAdkDF4C6Zak4r1qFY3MbJshuJmk7cQrECyA5KelHAdHestE4Yo4Sftb7UAUUBqDLO69RAPV6wproHg1IoU10VFw8IXD6rKkpc3hm/ZI+TXxkCCS0YhMjXEzZLapQMODiuWju+TiLmkFmKFcn1pBz8nHj+xIhoHmvM3IFtU= zuul@controller
Nov 26 01:14:46 compute-0 systemd[1]: libpod-cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357.scope: Deactivated successfully.
Nov 26 01:14:46 compute-0 conmon[194414]: conmon cbd5b86d255619ab502e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357.scope/container/memory.events
Nov 26 01:14:46 compute-0 podman[194399]: 2025-11-26 01:14:46.994077874 +0000 UTC m=+0.842223968 container died cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 26 01:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f30df64f4291aa2197d8068c616644a1a65428d8a79c2a31c6ae2edf8b19fda-merged.mount: Deactivated successfully.
Nov 26 01:14:47 compute-0 podman[194399]: 2025-11-26 01:14:47.051762425 +0000 UTC m=+0.899908559 container remove cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357 (image=quay.io/ceph/ceph:v18, name=hungry_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:47 compute-0 ceph-mon[192746]: Set ssh ssh_identity_pub
Nov 26 01:14:47 compute-0 systemd[1]: libpod-conmon-cbd5b86d255619ab502ee72b636f0c9b38a4af57df7169c7e3e9f7e4d1c7f357.scope: Deactivated successfully.
Nov 26 01:14:47 compute-0 podman[194452]: 2025-11-26 01:14:47.192189666 +0000 UTC m=+0.093614395 container create 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:14:47 compute-0 podman[194452]: 2025-11-26 01:14:47.157563689 +0000 UTC m=+0.058988428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:47 compute-0 systemd[1]: Started libpod-conmon-1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876.scope.
Nov 26 01:14:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ecd39afd68ac3f6c7fd49de43b0f97d4b1f0fd2e3d2cfea98933217628054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ecd39afd68ac3f6c7fd49de43b0f97d4b1f0fd2e3d2cfea98933217628054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/099ecd39afd68ac3f6c7fd49de43b0f97d4b1f0fd2e3d2cfea98933217628054/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:47 compute-0 podman[194452]: 2025-11-26 01:14:47.351035541 +0000 UTC m=+0.252460310 container init 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:14:47 compute-0 podman[194452]: 2025-11-26 01:14:47.384456975 +0000 UTC m=+0.285881684 container start 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:47 compute-0 podman[194452]: 2025-11-26 01:14:47.390552435 +0000 UTC m=+0.291977154 container attach 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:47 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:48 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 26 01:14:48 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 26 01:14:48 compute-0 systemd-logind[800]: New session 27 of user ceph-admin.
Nov 26 01:14:48 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 26 01:14:48 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 26 01:14:48 compute-0 podman[194497]: 2025-11-26 01:14:48.420183336 +0000 UTC m=+0.149833625 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64)
Nov 26 01:14:48 compute-0 podman[194499]: 2025-11-26 01:14:48.422010477 +0000 UTC m=+0.150657248 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:14:48 compute-0 systemd-logind[800]: New session 29 of user ceph-admin.
Nov 26 01:14:48 compute-0 systemd[194522]: Queued start job for default target Main User Target.
Nov 26 01:14:48 compute-0 systemd[194522]: Created slice User Application Slice.
Nov 26 01:14:48 compute-0 systemd[194522]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 01:14:48 compute-0 systemd[194522]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 01:14:48 compute-0 systemd[194522]: Reached target Paths.
Nov 26 01:14:48 compute-0 systemd[194522]: Reached target Timers.
Nov 26 01:14:48 compute-0 systemd[194522]: Starting D-Bus User Message Bus Socket...
Nov 26 01:14:48 compute-0 systemd[194522]: Starting Create User's Volatile Files and Directories...
Nov 26 01:14:48 compute-0 systemd[194522]: Listening on D-Bus User Message Bus Socket.
Nov 26 01:14:48 compute-0 systemd[194522]: Reached target Sockets.
Nov 26 01:14:48 compute-0 systemd[194522]: Finished Create User's Volatile Files and Directories.
Nov 26 01:14:48 compute-0 systemd[194522]: Reached target Basic System.
Nov 26 01:14:48 compute-0 systemd[194522]: Reached target Main User Target.
Nov 26 01:14:48 compute-0 systemd[194522]: Startup finished in 178ms.
Nov 26 01:14:48 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 26 01:14:48 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 26 01:14:48 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 26 01:14:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053059 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:14:48 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:49 compute-0 systemd-logind[800]: New session 30 of user ceph-admin.
Nov 26 01:14:49 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 26 01:14:49 compute-0 systemd-logind[800]: New session 31 of user ceph-admin.
Nov 26 01:14:49 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 26 01:14:50 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 26 01:14:50 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 26 01:14:50 compute-0 systemd-logind[800]: New session 32 of user ceph-admin.
Nov 26 01:14:50 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 26 01:14:50 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:51 compute-0 systemd-logind[800]: New session 33 of user ceph-admin.
Nov 26 01:14:51 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 26 01:14:51 compute-0 ceph-mon[192746]: Deploying cephadm binary to compute-0
Nov 26 01:14:51 compute-0 systemd-logind[800]: New session 34 of user ceph-admin.
Nov 26 01:14:51 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Nov 26 01:14:52 compute-0 systemd-logind[800]: New session 35 of user ceph-admin.
Nov 26 01:14:52 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Nov 26 01:14:52 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:52 compute-0 systemd-logind[800]: New session 36 of user ceph-admin.
Nov 26 01:14:52 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Nov 26 01:14:53 compute-0 systemd-logind[800]: New session 37 of user ceph-admin.
Nov 26 01:14:53 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Nov 26 01:14:53 compute-0 podman[194993]: 2025-11-26 01:14:53.727363539 +0000 UTC m=+0.122379577 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 01:14:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:14:54 compute-0 systemd-logind[800]: New session 38 of user ceph-admin.
Nov 26 01:14:54 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Nov 26 01:14:54 compute-0 podman[195092]: 2025-11-26 01:14:54.808813847 +0000 UTC m=+0.131447172 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release=1214.1726694543, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:14:54 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:54 compute-0 systemd-logind[800]: New session 39 of user ceph-admin.
Nov 26 01:14:54 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Nov 26 01:14:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:14:55 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:55 compute-0 ceph-mgr[193049]: [cephadm INFO root] Added host compute-0
Nov 26 01:14:55 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 01:14:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:14:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:14:55 compute-0 sad_mcnulty[194469]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 01:14:55 compute-0 systemd[1]: libpod-1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876.scope: Deactivated successfully.
Nov 26 01:14:55 compute-0 podman[194452]: 2025-11-26 01:14:55.742481908 +0000 UTC m=+8.643906617 container died 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-099ecd39afd68ac3f6c7fd49de43b0f97d4b1f0fd2e3d2cfea98933217628054-merged.mount: Deactivated successfully.
Nov 26 01:14:55 compute-0 podman[194452]: 2025-11-26 01:14:55.830208438 +0000 UTC m=+8.731633127 container remove 1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876 (image=quay.io/ceph/ceph:v18, name=sad_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:14:55 compute-0 systemd[1]: libpod-conmon-1d637727ad52ea4c7582b7935e6b2dcfcc90f988e55ca4bf74ffb6aa96da3876.scope: Deactivated successfully.
Nov 26 01:14:55 compute-0 podman[195218]: 2025-11-26 01:14:55.929908442 +0000 UTC m=+0.061800247 container create 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:14:55 compute-0 systemd[1]: Started libpod-conmon-43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7.scope.
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:55.909042289 +0000 UTC m=+0.040934124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bceac905b5a5fc44f86b55d517b85c85f088630c95db75d309a091f34a5c909f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bceac905b5a5fc44f86b55d517b85c85f088630c95db75d309a091f34a5c909f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bceac905b5a5fc44f86b55d517b85c85f088630c95db75d309a091f34a5c909f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:56.064730676 +0000 UTC m=+0.196622551 container init 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:56.077004199 +0000 UTC m=+0.208896034 container start 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:56.08420923 +0000 UTC m=+0.216101065 container attach 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:14:56 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:56 compute-0 ceph-mon[192746]: Added host compute-0
Nov 26 01:14:56 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:56 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 26 01:14:56 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 26 01:14:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 01:14:56 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:56 compute-0 friendly_buck[195259]: Scheduled mon update...
Nov 26 01:14:56 compute-0 podman[195358]: 2025-11-26 01:14:56.726443614 +0000 UTC m=+0.086531988 container create 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:56 compute-0 systemd[1]: libpod-43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7.scope: Deactivated successfully.
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:56.750934197 +0000 UTC m=+0.882825992 container died 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:56 compute-0 podman[195358]: 2025-11-26 01:14:56.686673253 +0000 UTC m=+0.046761677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:56 compute-0 systemd[1]: Started libpod-conmon-49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de.scope.
Nov 26 01:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bceac905b5a5fc44f86b55d517b85c85f088630c95db75d309a091f34a5c909f-merged.mount: Deactivated successfully.
Nov 26 01:14:56 compute-0 podman[195218]: 2025-11-26 01:14:56.837134794 +0000 UTC m=+0.969026609 container remove 43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7 (image=quay.io/ceph/ceph:v18, name=friendly_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 26 01:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:56 compute-0 systemd[1]: libpod-conmon-43bc0b27356b981491ea8b21a2872376adfd7d6be949a6090f81c59101aeccc7.scope: Deactivated successfully.
Nov 26 01:14:56 compute-0 podman[195358]: 2025-11-26 01:14:56.863186662 +0000 UTC m=+0.223275116 container init 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:14:56 compute-0 podman[195358]: 2025-11-26 01:14:56.879323653 +0000 UTC m=+0.239412037 container start 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:56 compute-0 podman[195358]: 2025-11-26 01:14:56.889479876 +0000 UTC m=+0.249568330 container attach 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
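The create → init → start → attach sequence above is the normal libpod lifecycle for the short-lived helper containers cephadm launches from quay.io/ceph/ceph:v18; each runs one command and is torn down afterwards (the "died" and "remove" events that follow). One way to watch this churn directly, assuming podman 4.x:
    # replay recent container lifecycle events (the names, e.g. elegant_moser, are podman's random ones)
    podman events --since 5m --filter event=create --filter event=remove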
Nov 26 01:14:56 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:56 compute-0 podman[195391]: 2025-11-26 01:14:56.953109063 +0000 UTC m=+0.078440781 container create ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:56.916557732 +0000 UTC m=+0.041889490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:57 compute-0 systemd[1]: Started libpod-conmon-ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8.scope.
Nov 26 01:14:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0766c20372a3badce4695f67858a1a73443cdc100c10897070dde18fd1d112df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0766c20372a3badce4695f67858a1a73443cdc100c10897070dde18fd1d112df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0766c20372a3badce4695f67858a1a73443cdc100c10897070dde18fd1d112df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
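The kernel's "supports timestamps until 2038" messages are informational: the XFS filesystem backing these overlay mounts was created without the bigtime feature, so inode timestamps cap at 0x7fffffff (January 2038). One way to check, assuming /var/lib/containers sits on the filesystem in question:
    # bigtime=1 would extend timestamps past 2038; bigtime=0 matches the message above
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'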
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:57.106443184 +0000 UTC m=+0.231774902 container init ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:57.122171534 +0000 UTC m=+0.247503252 container start ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:57.129095997 +0000 UTC m=+0.254427755 container attach ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:57 compute-0 elegant_moser[195385]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
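The elegant_moser line is the payload of one of those helper containers: cephadm checking the image by running ceph --version inside it. Reproducing it by hand would look roughly like:
    # prints the version baked into the image (the same string as the log line above)
    podman run --rm quay.io/ceph/ceph:v18 ceph --version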
Nov 26 01:14:57 compute-0 systemd[1]: libpod-49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de.scope: Deactivated successfully.
Nov 26 01:14:57 compute-0 podman[195414]: 2025-11-26 01:14:57.261140453 +0000 UTC m=+0.050155191 container died 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cbd49ab8f6ae55f8803bf89bd1ea267899c4b96d6163a610bb54df3b135474-merged.mount: Deactivated successfully.
Nov 26 01:14:57 compute-0 podman[195414]: 2025-11-26 01:14:57.349635344 +0000 UTC m=+0.138650032 container remove 49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de (image=quay.io/ceph/ceph:v18, name=elegant_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:14:57 compute-0 systemd[1]: libpod-conmon-49ca9924a759c8109c19ce4c1979da8d24db11874b92c7005cd006c3d57ee3de.scope: Deactivated successfully.
Nov 26 01:14:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 26 01:14:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:57 compute-0 ceph-mon[192746]: Saving service mon spec with placement count:5
Nov 26 01:14:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:57 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:57 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 26 01:14:57 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 26 01:14:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:14:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:57 compute-0 focused_davinci[195409]: Scheduled mgr update...
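As with the mon spec, the "orch apply" dispatch at 01:14:57 saves an mgr spec with placement count:2, and the helper container (focused_davinci) reports the scheduled update. The CLI form of that mon_command is approximately:
    # request 2 mgr daemons; cephadm reconciles placement in the background
    ceph orch apply mgr --placement=2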
Nov 26 01:14:57 compute-0 systemd[1]: libpod-ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8.scope: Deactivated successfully.
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:57.812095068 +0000 UTC m=+0.937426746 container died ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0766c20372a3badce4695f67858a1a73443cdc100c10897070dde18fd1d112df-merged.mount: Deactivated successfully.
Nov 26 01:14:57 compute-0 podman[195391]: 2025-11-26 01:14:57.870243892 +0000 UTC m=+0.995575570 container remove ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8 (image=quay.io/ceph/ceph:v18, name=focused_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:14:57 compute-0 systemd[1]: libpod-conmon-ea9040408bd1b99426c4ace30f229ce5a76d2cd43b7aec7cafcae28e874865f8.scope: Deactivated successfully.
Nov 26 01:14:57 compute-0 podman[195552]: 2025-11-26 01:14:57.931981855 +0000 UTC m=+0.040721268 container create 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:14:57 compute-0 systemd[1]: Started libpod-conmon-5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e.scope.
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:57.916902434 +0000 UTC m=+0.025641867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4972cd6df44f7b775c9729282f94d46d3e9750748f9847d8cc775d174cb17ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4972cd6df44f7b775c9729282f94d46d3e9750748f9847d8cc775d174cb17ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4972cd6df44f7b775c9729282f94d46d3e9750748f9847d8cc775d174cb17ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:58.05427536 +0000 UTC m=+0.163014863 container init 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:58.069106274 +0000 UTC m=+0.177845727 container start 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:58.075247546 +0000 UTC m=+0.183986969 container attach 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:14:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:58 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:14:58 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service crash spec with placement *
Nov 26 01:14:58 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 26 01:14:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 01:14:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:58 compute-0 objective_dirac[195574]: Scheduled crash update...
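Placement "*" in the crash spec means one crash-collector daemon per managed host. The equivalent CLI, quoting the asterisk so the shell does not glob it:
    ceph orch apply crash --placement='*'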
Nov 26 01:14:58 compute-0 systemd[1]: libpod-5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e.scope: Deactivated successfully.
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:58.674422157 +0000 UTC m=+0.783161660 container died 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:14:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4972cd6df44f7b775c9729282f94d46d3e9750748f9847d8cc775d174cb17ee-merged.mount: Deactivated successfully.
Nov 26 01:14:58 compute-0 podman[195552]: 2025-11-26 01:14:58.757750994 +0000 UTC m=+0.866490407 container remove 5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e (image=quay.io/ceph/ceph:v18, name=objective_dirac, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:14:58 compute-0 systemd[1]: libpod-conmon-5ea8ba2ff0de4870515214ad2fe047de078cf3389ea259403b22eae9cd483b2e.scope: Deactivated successfully.
Nov 26 01:14:58 compute-0 ceph-mon[192746]: Saving service mgr spec with placement count:2
Nov 26 01:14:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:14:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
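_set_new_cache_sizes is the mon's memory autotuner splitting its target between the osdmap caches (inc/full) and the RocksDB cache (kv). The figures are plain bytes; converting them, assuming coreutils numfmt:
    numfmt --to=iec 1020054731 348127232 322961408
    # -> 973M (cache_size), 332M (inc/full_alloc), 308M (kv_alloc)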
Nov 26 01:14:58 compute-0 podman[195734]: 2025-11-26 01:14:58.872113317 +0000 UTC m=+0.081926349 container create e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:14:58 compute-0 podman[195734]: 2025-11-26 01:14:58.835513605 +0000 UTC m=+0.045326697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:58 compute-0 systemd[1]: Started libpod-conmon-e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9.scope.
Nov 26 01:14:58 compute-0 ceph-mgr[193049]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 26 01:14:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f08dfd2a0f134b1c4871fa8450727bcad9e709d9afae0a89bb6d6a77e821d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f08dfd2a0f134b1c4871fa8450727bcad9e709d9afae0a89bb6d6a77e821d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f08dfd2a0f134b1c4871fa8450727bcad9e709d9afae0a89bb6d6a77e821d33/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:59 compute-0 podman[195734]: 2025-11-26 01:14:59.026077726 +0000 UTC m=+0.235890828 container init e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:14:59 compute-0 podman[195734]: 2025-11-26 01:14:59.047398852 +0000 UTC m=+0.257211854 container start e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:14:59 compute-0 podman[195734]: 2025-11-26 01:14:59.052393831 +0000 UTC m=+0.262206913 container attach e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:14:59 compute-0 podman[195832]: 2025-11-26 01:14:59.610740633 +0000 UTC m=+0.155130133 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:14:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 26 01:14:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1950139032' entity='client.admin' 
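This config set comes from client.admin rather than the mgr: it is the bootstrap CLI setting a cephadm module option. Module options live under the mgr section, so the hand-typed form would be something like:
    # value shown is illustrative; the audit line does not record what was set
    ceph config set mgr mgr/cephadm/container_init true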
Nov 26 01:14:59 compute-0 podman[195734]: 2025-11-26 01:14:59.655016669 +0000 UTC m=+0.864829691 container died e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:14:59 compute-0 systemd[1]: libpod-e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9.scope: Deactivated successfully.
Nov 26 01:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f08dfd2a0f134b1c4871fa8450727bcad9e709d9afae0a89bb6d6a77e821d33-merged.mount: Deactivated successfully.
Nov 26 01:14:59 compute-0 podman[195734]: 2025-11-26 01:14:59.734274712 +0000 UTC m=+0.944087714 container remove e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9 (image=quay.io/ceph/ceph:v18, name=nervous_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:14:59 compute-0 podman[158021]: time="2025-11-26T01:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:14:59 compute-0 systemd[1]: libpod-conmon-e46ebafb1347eedf1e018545b4b53aa2a36071d922cada2f8de635d21827fcf9.scope: Deactivated successfully.
Nov 26 01:14:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22105 "" "Go-http-client/1.1"
Nov 26 01:14:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3940 "" "Go-http-client/1.1"
Nov 26 01:14:59 compute-0 ceph-mon[192746]: Saving service crash spec with placement *
Nov 26 01:14:59 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1950139032' entity='client.admin' 
Nov 26 01:14:59 compute-0 podman[195866]: 2025-11-26 01:14:59.853226254 +0000 UTC m=+0.076127277 container create 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 26 01:14:59 compute-0 systemd[1]: Started libpod-conmon-54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4.scope.
Nov 26 01:14:59 compute-0 podman[195866]: 2025-11-26 01:14:59.826799786 +0000 UTC m=+0.049700799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:14:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1887de0bee708548f5ea1be043ab5656a63b92db611b019fc62179e0911a3298/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1887de0bee708548f5ea1be043ab5656a63b92db611b019fc62179e0911a3298/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1887de0bee708548f5ea1be043ab5656a63b92db611b019fc62179e0911a3298/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:14:59 compute-0 podman[195866]: 2025-11-26 01:14:59.991235188 +0000 UTC m=+0.214136261 container init 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:00 compute-0 podman[195866]: 2025-11-26 01:15:00.005755463 +0000 UTC m=+0.228656496 container start 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:00 compute-0 podman[195866]: 2025-11-26 01:15:00.013369296 +0000 UTC m=+0.236270369 container attach 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:00 compute-0 podman[195832]: 2025-11-26 01:15:00.02067799 +0000 UTC m=+0.565067470 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:15:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:00 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:15:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 26 01:15:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
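"orch client-keyring set" tells cephadm to maintain /etc/ceph/ceph.client.admin.keyring on every host carrying the _admin label. The dispatched command at 01:15:00 corresponds to:
    ceph orch client-keyring set client.admin label:_admin
    # list the keyrings cephadm is managing
    ceph orch client-keyring ls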
Nov 26 01:15:00 compute-0 systemd[1]: libpod-54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4.scope: Deactivated successfully.
Nov 26 01:15:00 compute-0 podman[195866]: 2025-11-26 01:15:00.59870442 +0000 UTC m=+0.821605423 container died 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1887de0bee708548f5ea1be043ab5656a63b92db611b019fc62179e0911a3298-merged.mount: Deactivated successfully.
Nov 26 01:15:00 compute-0 podman[195866]: 2025-11-26 01:15:00.660915877 +0000 UTC m=+0.883816880 container remove 54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4 (image=quay.io/ceph/ceph:v18, name=magical_galois, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:15:00 compute-0 systemd[1]: libpod-conmon-54ef30647e81f48caf409986e171ca08406a35c63b7e85e52d72d23b587b69b4.scope: Deactivated successfully.
Nov 26 01:15:00 compute-0 podman[196021]: 2025-11-26 01:15:00.771347181 +0000 UTC m=+0.083805161 container create 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:00 compute-0 podman[196021]: 2025-11-26 01:15:00.735777288 +0000 UTC m=+0.048235288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:00 compute-0 systemd[1]: Started libpod-conmon-40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988.scope.
Nov 26 01:15:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffaad014f4f47e67336f79c289e1566ef1b88a0d55064991e6cffaa7d719d0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffaad014f4f47e67336f79c289e1566ef1b88a0d55064991e6cffaa7d719d0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ffaad014f4f47e67336f79c289e1566ef1b88a0d55064991e6cffaa7d719d0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:00 compute-0 podman[196021]: 2025-11-26 01:15:00.905547787 +0000 UTC m=+0.218005807 container init 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:00 compute-0 podman[196021]: 2025-11-26 01:15:00.927109369 +0000 UTC m=+0.239567339 container start 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:00 compute-0 podman[196021]: 2025-11-26 01:15:00.947991162 +0000 UTC m=+0.260449172 container attach 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:00 compute-0 ceph-mgr[193049]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 26 01:15:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:00 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 26 01:15:01 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196080 (sysctl)
Nov 26 01:15:01 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 26 01:15:01 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 26 01:15:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:01 compute-0 ceph-mon[192746]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
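TOO_FEW_OSDS is expected at this stage: the cluster has a monitor and a mgr but no OSDs yet, and osd_pool_default_size is 1. Typical checks while waiting for OSD deployment:
    ceph health detail    # shows the TOO_FEW_OSDS description
    ceph osd stat         # currently: 0 osds
    # one common way to create OSDs once disks are available:
    ceph orch apply osd --all-available-devices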
Nov 26 01:15:01 compute-0 openstack_network_exporter[160178]: ERROR   01:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:15:01 compute-0 openstack_network_exporter[160178]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:15:01 compute-0 openstack_network_exporter[160178]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:15:01 compute-0 openstack_network_exporter[160178]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:15:01 compute-0 openstack_network_exporter[160178]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:15:01 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:15:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:15:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:01 compute-0 ceph-mgr[193049]: [cephadm INFO root] Added label _admin to host compute-0
Nov 26 01:15:01 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 26 01:15:01 compute-0 quirky_cartwright[196062]: Added label _admin to host compute-0
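The same message appears three times because it travels through three channels: the cephadm module log, the cluster log, and the helper container's stdout (quirky_cartwright). The originating command, visible verbatim in the audit line at 01:15:01, is:
    ceph orch host label add compute-0 _admin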
Nov 26 01:15:01 compute-0 systemd[1]: libpod-40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988.scope: Deactivated successfully.
Nov 26 01:15:01 compute-0 podman[196021]: 2025-11-26 01:15:01.573407216 +0000 UTC m=+0.885865156 container died 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffaad014f4f47e67336f79c289e1566ef1b88a0d55064991e6cffaa7d719d0a-merged.mount: Deactivated successfully.
Nov 26 01:15:01 compute-0 podman[196021]: 2025-11-26 01:15:01.64514964 +0000 UTC m=+0.957607580 container remove 40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988 (image=quay.io/ceph/ceph:v18, name=quirky_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:01 compute-0 systemd[1]: libpod-conmon-40ac263e3f9fb0b52689a286f00326a151ceff5c392c150261c6606c797dc988.scope: Deactivated successfully.
Nov 26 01:15:01 compute-0 podman[196157]: 2025-11-26 01:15:01.773933556 +0000 UTC m=+0.082626478 container create 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:01 compute-0 podman[196157]: 2025-11-26 01:15:01.744334039 +0000 UTC m=+0.053027041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:01 compute-0 systemd[1]: Started libpod-conmon-6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5.scope.
Nov 26 01:15:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862afb499fd8a87fa55c1753699114c93feefd226ae7f73e1ddb8c534de487ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862afb499fd8a87fa55c1753699114c93feefd226ae7f73e1ddb8c534de487ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862afb499fd8a87fa55c1753699114c93feefd226ae7f73e1ddb8c534de487ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:01 compute-0 podman[196157]: 2025-11-26 01:15:01.939234522 +0000 UTC m=+0.247927534 container init 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 01:15:01 compute-0 podman[196157]: 2025-11-26 01:15:01.958927952 +0000 UTC m=+0.267620894 container start 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:15:01 compute-0 podman[196157]: 2025-11-26 01:15:01.968635483 +0000 UTC m=+0.277328405 container attach 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:15:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 26 01:15:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3947709593' entity='client.admin' 
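osd_memory_target_autotune lets cephadm size each OSD's memory target from host RAM. The audit line does not record the value, but the conventional bootstrap setting is:
    # illustrative value; commonly true on dedicated storage nodes
    ceph config set osd osd_memory_target_autotune true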
Nov 26 01:15:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:02 compute-0 ceph-mon[192746]: Added label _admin to host compute-0
Nov 26 01:15:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:02 compute-0 podman[196157]: 2025-11-26 01:15:02.556505128 +0000 UTC m=+0.865198070 container died 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:02 compute-0 systemd[1]: libpod-6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5.scope: Deactivated successfully.
Nov 26 01:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-862afb499fd8a87fa55c1753699114c93feefd226ae7f73e1ddb8c534de487ec-merged.mount: Deactivated successfully.
Nov 26 01:15:02 compute-0 podman[196157]: 2025-11-26 01:15:02.634624089 +0000 UTC m=+0.943317041 container remove 6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5 (image=quay.io/ceph/ceph:v18, name=dreamy_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 26 01:15:02 compute-0 systemd[1]: libpod-conmon-6172551af334c71e29814e812156927a32c58d0500f8d4a478d3da2bbbd31ed5.scope: Deactivated successfully.
Nov 26 01:15:02 compute-0 podman[196340]: 2025-11-26 01:15:02.757352306 +0000 UTC m=+0.082039352 container create 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:02 compute-0 podman[196340]: 2025-11-26 01:15:02.718531162 +0000 UTC m=+0.043218218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:02 compute-0 systemd[1]: Started libpod-conmon-1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37.scope.
Nov 26 01:15:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baeb5c9294e71f876300242e3eb52612bdf0089dadd6a73513061852df5cef6a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baeb5c9294e71f876300242e3eb52612bdf0089dadd6a73513061852df5cef6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baeb5c9294e71f876300242e3eb52612bdf0089dadd6a73513061852df5cef6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:02 compute-0 podman[196340]: 2025-11-26 01:15:02.907368666 +0000 UTC m=+0.232055722 container init 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:15:02 compute-0 podman[196340]: 2025-11-26 01:15:02.928437554 +0000 UTC m=+0.253124580 container start 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:15:02 compute-0 podman[196340]: 2025-11-26 01:15:02.9343796 +0000 UTC m=+0.259066706 container attach 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:15:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.496714202 +0000 UTC m=+0.071045175 container create c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:15:03 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3947709593' entity='client.admin' 
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.469598345 +0000 UTC m=+0.043929398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:03 compute-0 systemd[1]: Started libpod-conmon-c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf.scope.
Nov 26 01:15:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 26 01:15:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:03 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2575222315' entity='client.admin' 
Nov 26 01:15:03 compute-0 funny_antonelli[196390]: set mgr/dashboard/cluster/status
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.640546278 +0000 UTC m=+0.214877341 container init c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:15:03 compute-0 systemd[1]: libpod-1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37.scope: Deactivated successfully.
Nov 26 01:15:03 compute-0 podman[196340]: 2025-11-26 01:15:03.645599979 +0000 UTC m=+0.970287035 container died 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.653653434 +0000 UTC m=+0.227984407 container start c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:03 compute-0 condescending_payne[196493]: 167 167
Nov 26 01:15:03 compute-0 systemd[1]: libpod-c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf.scope: Deactivated successfully.
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.668767756 +0000 UTC m=+0.243098829 container attach c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.669696642 +0000 UTC m=+0.244027655 container died c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 01:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-baeb5c9294e71f876300242e3eb52612bdf0089dadd6a73513061852df5cef6a-merged.mount: Deactivated successfully.
Nov 26 01:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b9655a6512fcf11bb0452e395a8e38b7a04ceab9519fedddc8a425f3be84ac-merged.mount: Deactivated successfully.
Nov 26 01:15:03 compute-0 podman[196340]: 2025-11-26 01:15:03.755149528 +0000 UTC m=+1.079836554 container remove 1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37 (image=quay.io/ceph/ceph:v18, name=funny_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 26 01:15:03 compute-0 systemd[1]: libpod-conmon-1bb94e686b7e29cc2a37636b39c6e32b2d2fe660bfc2474311e618c186b19c37.scope: Deactivated successfully.
Nov 26 01:15:03 compute-0 podman[196476]: 2025-11-26 01:15:03.793748116 +0000 UTC m=+0.368079129 container remove c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:15:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:03 compute-0 systemd[1]: libpod-conmon-c00992dd298577ffad996437127ef1bb34d7afddd8bef328c72b64a386aa6ecf.scope: Deactivated successfully.
Nov 26 01:15:04 compute-0 podman[196528]: 2025-11-26 01:15:04.071569134 +0000 UTC m=+0.081556918 container create 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:15:04 compute-0 podman[196528]: 2025-11-26 01:15:04.031957858 +0000 UTC m=+0.041945692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:04 compute-0 systemd[1]: Started libpod-conmon-715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9.scope.
Nov 26 01:15:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26d057f39107c2c039e20a0ca10b08b468ed2e8cab2234079c0c0430b51432c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26d057f39107c2c039e20a0ca10b08b468ed2e8cab2234079c0c0430b51432c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26d057f39107c2c039e20a0ca10b08b468ed2e8cab2234079c0c0430b51432c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26d057f39107c2c039e20a0ca10b08b468ed2e8cab2234079c0c0430b51432c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 podman[196528]: 2025-11-26 01:15:04.24906322 +0000 UTC m=+0.259051074 container init 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:04 compute-0 podman[196528]: 2025-11-26 01:15:04.269264635 +0000 UTC m=+0.279252429 container start 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:15:04 compute-0 podman[196528]: 2025-11-26 01:15:04.277137984 +0000 UTC m=+0.287125838 container attach 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:04 compute-0 python3[196574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:04 compute-0 podman[196575]: 2025-11-26 01:15:04.54452544 +0000 UTC m=+0.092152324 container create a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:15:04 compute-0 podman[196575]: 2025-11-26 01:15:04.513302838 +0000 UTC m=+0.060929762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:04 compute-0 systemd[1]: Started libpod-conmon-a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516.scope.
Nov 26 01:15:04 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2575222315' entity='client.admin' 
Nov 26 01:15:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b51a9b137bb920a7dc8da56618bb804602334482004d3b67863d25bdbbf08b97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b51a9b137bb920a7dc8da56618bb804602334482004d3b67863d25bdbbf08b97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:04 compute-0 podman[196575]: 2025-11-26 01:15:04.744517504 +0000 UTC m=+0.292144388 container init a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:04 compute-0 podman[196575]: 2025-11-26 01:15:04.758635979 +0000 UTC m=+0.306262853 container start a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:15:04 compute-0 podman[196575]: 2025-11-26 01:15:04.76764093 +0000 UTC m=+0.315267814 container attach a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:15:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 26 01:15:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4276612813' entity='client.admin' 
Nov 26 01:15:05 compute-0 systemd[1]: libpod-a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516.scope: Deactivated successfully.
Nov 26 01:15:05 compute-0 podman[196575]: 2025-11-26 01:15:05.380161904 +0000 UTC m=+0.927788748 container died a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 26 01:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b51a9b137bb920a7dc8da56618bb804602334482004d3b67863d25bdbbf08b97-merged.mount: Deactivated successfully.
Nov 26 01:15:05 compute-0 podman[196575]: 2025-11-26 01:15:05.466007271 +0000 UTC m=+1.013634125 container remove a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516 (image=quay.io/ceph/ceph:v18, name=admiring_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:05 compute-0 systemd[1]: libpod-conmon-a41a0534a39e93268e2e2fcd4acfa0a229176805134040b2dd9b1540708df516.scope: Deactivated successfully.
Nov 26 01:15:05 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4276612813' entity='client.admin' 
Nov 26 01:15:06 compute-0 peaceful_wright[196545]: [
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:    {
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "available": false,
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "ceph_device": false,
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "lsm_data": {},
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "lvs": [],
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "path": "/dev/sr0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "rejected_reasons": [
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "Insufficient space (<5GB)",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "Has a FileSystem"
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        ],
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        "sys_api": {
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "actuators": null,
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "device_nodes": "sr0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "devname": "sr0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "human_readable_size": "482.00 KB",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "id_bus": "ata",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "model": "QEMU DVD-ROM",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "nr_requests": "2",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "parent": "/dev/sr0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "partitions": {},
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "path": "/dev/sr0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "removable": "1",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "rev": "2.5+",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "ro": "0",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "rotational": "1",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "sas_address": "",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "sas_device_handle": "",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "scheduler_mode": "mq-deadline",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "sectors": 0,
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "sectorsize": "2048",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "size": 493568.0,
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "support_discard": "2048",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "type": "disk",
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:            "vendor": "QEMU"
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:        }
Nov 26 01:15:06 compute-0 peaceful_wright[196545]:    }
Nov 26 01:15:06 compute-0 peaceful_wright[196545]: ]
Nov 26 01:15:06 compute-0 podman[196528]: 2025-11-26 01:15:06.603577836 +0000 UTC m=+2.613565590 container died 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:06 compute-0 systemd[1]: libpod-715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9.scope: Deactivated successfully.
Nov 26 01:15:06 compute-0 systemd[1]: libpod-715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9.scope: Consumed 2.379s CPU time.
Nov 26 01:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26d057f39107c2c039e20a0ca10b08b468ed2e8cab2234079c0c0430b51432c-merged.mount: Deactivated successfully.
Nov 26 01:15:06 compute-0 podman[196528]: 2025-11-26 01:15:06.694037922 +0000 UTC m=+2.704025686 container remove 715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 01:15:06 compute-0 systemd[1]: libpod-conmon-715043429c371ca5b2e123ebd823a25d80edb1c97d785343215873777c0c1fc9.scope: Deactivated successfully.
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:15:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:15:06 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 26 01:15:06 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 26 01:15:06 compute-0 ansible-async_wrapper.py[198810]: Invoked with j410643610569 30 /home/zuul/.ansible/tmp/ansible-tmp-1764119705.9726586-36992-233714261392575/AnsiballZ_command.py _
Nov 26 01:15:06 compute-0 ansible-async_wrapper.py[198837]: Starting module and watcher
Nov 26 01:15:06 compute-0 ansible-async_wrapper.py[198837]: Start watching 198838 (30)
Nov 26 01:15:06 compute-0 ansible-async_wrapper.py[198838]: Start module (198838)
Nov 26 01:15:06 compute-0 ansible-async_wrapper.py[198810]: Return async_wrapper task started.
Nov 26 01:15:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:07 compute-0 python3[198840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.15095782 +0000 UTC m=+0.074748578 container create bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 26 01:15:07 compute-0 systemd[1]: Started libpod-conmon-bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736.scope.
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.1283697 +0000 UTC m=+0.052160458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129c65ccbd00c4c04b71e8ea6d03a33defda1a465200cbb8b0c952120ef9526/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129c65ccbd00c4c04b71e8ea6d03a33defda1a465200cbb8b0c952120ef9526/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.276331301 +0000 UTC m=+0.200122069 container init bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.292486342 +0000 UTC m=+0.216277090 container start bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.2974275 +0000 UTC m=+0.221218268 container attach bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:15:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:15:07 compute-0 ceph-mon[192746]: Updating compute-0:/etc/ceph/ceph.conf
Nov 26 01:15:07 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:15:07 compute-0 sleepy_mclean[198928]: 
Nov 26 01:15:07 compute-0 sleepy_mclean[198928]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 01:15:07 compute-0 systemd[1]: libpod-bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736.scope: Deactivated successfully.
Nov 26 01:15:07 compute-0 podman[198884]: 2025-11-26 01:15:07.947924803 +0000 UTC m=+0.871715591 container died bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2129c65ccbd00c4c04b71e8ea6d03a33defda1a465200cbb8b0c952120ef9526-merged.mount: Deactivated successfully.
Nov 26 01:15:08 compute-0 podman[198884]: 2025-11-26 01:15:08.033237926 +0000 UTC m=+0.957028684 container remove bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736 (image=quay.io/ceph/ceph:v18, name=sleepy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:15:08 compute-0 systemd[1]: libpod-conmon-bf46832fa4a1c5a00d357bb41955541a8afc9334fd45f44969b8d21114157736.scope: Deactivated successfully.
Nov 26 01:15:08 compute-0 ansible-async_wrapper.py[198838]: Module complete (198838)
Nov 26 01:15:08 compute-0 python3[199262]: ansible-ansible.legacy.async_status Invoked with jid=j410643610569.198810 mode=status _async_dir=/root/.ansible_async
Nov 26 01:15:08 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.conf
Nov 26 01:15:08 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.conf
Nov 26 01:15:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:08 compute-0 python3[199400]: ansible-ansible.legacy.async_status Invoked with jid=j410643610569.198810 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 01:15:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:09 compute-0 python3[199562]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:15:09 compute-0 ceph-mon[192746]: Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.conf
Nov 26 01:15:10 compute-0 python3[199715]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.103920036 +0000 UTC m=+0.075454648 container create 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.067679384 +0000 UTC m=+0.039213996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:10 compute-0 systemd[1]: Started libpod-conmon-39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b.scope.
Nov 26 01:15:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5440f453e2b50d101ecac948fab91a976e1abd31370caa4c92f63cf56a03fe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5440f453e2b50d101ecac948fab91a976e1abd31370caa4c92f63cf56a03fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5440f453e2b50d101ecac948fab91a976e1abd31370caa4c92f63cf56a03fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.285738643 +0000 UTC m=+0.257273275 container init 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.294275692 +0000 UTC m=+0.265810274 container start 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.302543222 +0000 UTC m=+0.274077834 container attach 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:15:10 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 01:15:10 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 01:15:10 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:15:10 compute-0 silly_wiles[199802]: 
Nov 26 01:15:10 compute-0 silly_wiles[199802]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 01:15:10 compute-0 systemd[1]: libpod-39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b.scope: Deactivated successfully.
Nov 26 01:15:10 compute-0 podman[199764]: 2025-11-26 01:15:10.926985519 +0000 UTC m=+0.898520111 container died 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d5440f453e2b50d101ecac948fab91a976e1abd31370caa4c92f63cf56a03fe-merged.mount: Deactivated successfully.
Nov 26 01:15:11 compute-0 podman[199764]: 2025-11-26 01:15:11.001478969 +0000 UTC m=+0.973013551 container remove 39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b (image=quay.io/ceph/ceph:v18, name=silly_wiles, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:15:11 compute-0 systemd[1]: libpod-conmon-39ac9868855723df0668e11ef26c60ee56ee8c7a4ed8a6df9269bd275b80fa5b.scope: Deactivated successfully.
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:11 compute-0 python3[200136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:11 compute-0 podman[200165]: 2025-11-26 01:15:11.725031902 +0000 UTC m=+0.054367659 container create efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:15:11 compute-0 podman[200165]: 2025-11-26 01:15:11.706546936 +0000 UTC m=+0.035882763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:11 compute-0 systemd[1]: Started libpod-conmon-efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795.scope.
Nov 26 01:15:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2e2dce4e399bfd5c5f6dcf254bda581b130c04240308316c2ac179fc3273/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2e2dce4e399bfd5c5f6dcf254bda581b130c04240308316c2ac179fc3273/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2e2dce4e399bfd5c5f6dcf254bda581b130c04240308316c2ac179fc3273/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:11 compute-0 ansible-async_wrapper.py[198837]: Done in kid B.
Nov 26 01:15:11 compute-0 podman[200165]: 2025-11-26 01:15:11.966105384 +0000 UTC m=+0.295441181 container init efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:11 compute-0 podman[200165]: 2025-11-26 01:15:11.980766593 +0000 UTC m=+0.310102340 container start efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:15:12 compute-0 podman[200165]: 2025-11-26 01:15:12.072086813 +0000 UTC m=+0.401422620 container attach efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:12 compute-0 ceph-mon[192746]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 26 01:15:12 compute-0 auditd[705]: Audit daemon rotating log files
Nov 26 01:15:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 26 01:15:12 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.client.admin.keyring
Nov 26 01:15:12 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.client.admin.keyring
Nov 26 01:15:12 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2658313383' entity='client.admin' 
Nov 26 01:15:12 compute-0 systemd[1]: libpod-efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795.scope: Deactivated successfully.
Nov 26 01:15:12 compute-0 podman[200165]: 2025-11-26 01:15:12.678469846 +0000 UTC m=+1.007805643 container died efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a6b2e2dce4e399bfd5c5f6dcf254bda581b130c04240308316c2ac179fc3273-merged.mount: Deactivated successfully.
Nov 26 01:15:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:12 compute-0 podman[200165]: 2025-11-26 01:15:12.983910615 +0000 UTC m=+1.313246372 container remove efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795 (image=quay.io/ceph/ceph:v18, name=relaxed_newton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:15:13 compute-0 systemd[1]: libpod-conmon-efad02366241648f293963520eddcca0a9c46986a44748c6f898d90929940795.scope: Deactivated successfully.
Nov 26 01:15:13 compute-0 python3[200564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:13 compute-0 podman[200614]: 2025-11-26 01:15:13.469648878 +0000 UTC m=+0.049005149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:13 compute-0 podman[200614]: 2025-11-26 01:15:13.570614787 +0000 UTC m=+0.149970978 container create 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:15:13 compute-0 systemd[1]: Started libpod-conmon-159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82.scope.
Nov 26 01:15:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9483b746979f747e257a500f789804e0ef68f2e8e12ad64457189370cf01e6ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:13 compute-0 ceph-mon[192746]: Updating compute-0:/var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/config/ceph.client.admin.keyring
Nov 26 01:15:13 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2658313383' entity='client.admin' 
Nov 26 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9483b746979f747e257a500f789804e0ef68f2e8e12ad64457189370cf01e6ec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9483b746979f747e257a500f789804e0ef68f2e8e12ad64457189370cf01e6ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:13 compute-0 podman[200614]: 2025-11-26 01:15:13.939554469 +0000 UTC m=+0.518910730 container init 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:15:13 compute-0 podman[200614]: 2025-11-26 01:15:13.957714097 +0000 UTC m=+0.537070318 container start 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:13 compute-0 podman[200614]: 2025-11-26 01:15:13.982675813 +0000 UTC m=+0.562032015 container attach 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:15:14 compute-0 podman[200811]: 2025-11-26 01:15:14.402324972 +0000 UTC m=+0.112759750 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:15:14 compute-0 podman[200808]: 2025-11-26 01:15:14.426913308 +0000 UTC m=+0.142625693 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 01:15:14 compute-0 podman[200814]: 2025-11-26 01:15:14.457661807 +0000 UTC m=+0.168190408 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1888791691' entity='client.admin' 
Nov 26 01:15:14 compute-0 systemd[1]: libpod-159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82.scope: Deactivated successfully.
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:15:14 compute-0 podman[200971]: 2025-11-26 01:15:14.746313387 +0000 UTC m=+0.062035643 container died 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:14 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev 2bc3c95f-ed20-4640-8a84-279b8073da09 (Updating crash deployment (+1 -> 1))
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 01:15:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:14 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 26 01:15:14 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 26 01:15:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9483b746979f747e257a500f789804e0ef68f2e8e12ad64457189370cf01e6ec-merged.mount: Deactivated successfully.
Nov 26 01:15:15 compute-0 podman[200971]: 2025-11-26 01:15:15.208942674 +0000 UTC m=+0.524664930 container remove 159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82 (image=quay.io/ceph/ceph:v18, name=clever_grothendieck, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:15 compute-0 systemd[1]: libpod-conmon-159ca6365b21bc5776333e9d1e786811f2c7ed0d751c9156c4e8987923edcf82.scope: Deactivated successfully.
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1888791691' entity='client.admin' 
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 26 01:15:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 26 01:15:15 compute-0 ceph-mon[192746]: Deploying daemon crash.compute-0 on compute-0
Nov 26 01:15:15 compute-0 python3[201111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:15 compute-0 podman[201125]: 2025-11-26 01:15:15.803174857 +0000 UTC m=+0.076936659 container create 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:15 compute-0 podman[201125]: 2025-11-26 01:15:15.766085981 +0000 UTC m=+0.039847773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:15 compute-0 systemd[1]: Started libpod-conmon-9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c.scope.
Nov 26 01:15:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e04a5a9ff778e9485462342636607e28d6429cf4a901ee7cbffd6c576e764c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e04a5a9ff778e9485462342636607e28d6429cf4a901ee7cbffd6c576e764c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e04a5a9ff778e9485462342636607e28d6429cf4a901ee7cbffd6c576e764c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.100618073 +0000 UTC m=+0.103841781 container create 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:16 compute-0 podman[201125]: 2025-11-26 01:15:16.123139551 +0000 UTC m=+0.396901353 container init 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:16 compute-0 podman[201125]: 2025-11-26 01:15:16.139725185 +0000 UTC m=+0.413486977 container start 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.059898945 +0000 UTC m=+0.063122723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:16 compute-0 podman[201125]: 2025-11-26 01:15:16.171972635 +0000 UTC m=+0.445734427 container attach 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:15:16 compute-0 systemd[1]: Started libpod-conmon-1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158.scope.
Nov 26 01:15:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.272652486 +0000 UTC m=+0.275876204 container init 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.288067637 +0000 UTC m=+0.291291335 container start 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:15:16 compute-0 frosty_cerf[201183]: 167 167
Nov 26 01:15:16 compute-0 systemd[1]: libpod-1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158.scope: Deactivated successfully.
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.308377594 +0000 UTC m=+0.311601352 container attach 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.309965108 +0000 UTC m=+0.313188826 container died 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-53e253f9869deafdbafa06e0aeabf6baef2a5a64f73b15e6a4f7a43639358cb6-merged.mount: Deactivated successfully.
Nov 26 01:15:16 compute-0 podman[201162]: 2025-11-26 01:15:16.473206497 +0000 UTC m=+0.476430185 container remove 1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:15:16 compute-0 systemd[1]: libpod-conmon-1e29b32212b8e4ba803826eacc10719c49e0a49694d7721c77ae5ce6e3f1d158.scope: Deactivated successfully.
Nov 26 01:15:16 compute-0 systemd[1]: Reloading.
Nov 26 01:15:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 26 01:15:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3101108409' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 01:15:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 26 01:15:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:16 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3101108409' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 26 01:15:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3101108409' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 01:15:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 26 01:15:17 compute-0 serene_archimedes[201164]: set require_min_compat_client to mimic
Nov 26 01:15:17 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 26 01:15:17 compute-0 systemd[1]: libpod-9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c.scope: Deactivated successfully.
Nov 26 01:15:17 compute-0 podman[201125]: 2025-11-26 01:15:17.079324331 +0000 UTC m=+1.353086103 container died 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:15:17 compute-0 systemd[1]: Reloading.
Nov 26 01:15:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-59e04a5a9ff778e9485462342636607e28d6429cf4a901ee7cbffd6c576e764c-merged.mount: Deactivated successfully.
Nov 26 01:15:17 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:15:17 compute-0 podman[201125]: 2025-11-26 01:15:17.634672239 +0000 UTC m=+1.908434041 container remove 9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c (image=quay.io/ceph/ceph:v18, name=serene_archimedes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:15:17 compute-0 systemd[1]: libpod-conmon-9d02f3b928f7baeef95c7ece646551718887e7950eb6b1bc30b1aeaf80a0900c.scope: Deactivated successfully.
Nov 26 01:15:18 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3101108409' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 26 01:15:18 compute-0 podman[201356]: 2025-11-26 01:15:18.043247158 +0000 UTC m=+0.095700584 container create 6e99a14a2bad1497c3b8ae15ad85175314830833cf59cfcae65e469d694c7b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:15:18 compute-0 podman[201356]: 2025-11-26 01:15:17.983718625 +0000 UTC m=+0.036172091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3d7548b83e77e99fccf46ceb16bdd4746b446043719eae625d401e20b716f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3d7548b83e77e99fccf46ceb16bdd4746b446043719eae625d401e20b716f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3d7548b83e77e99fccf46ceb16bdd4746b446043719eae625d401e20b716f1/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3d7548b83e77e99fccf46ceb16bdd4746b446043719eae625d401e20b716f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 podman[201356]: 2025-11-26 01:15:18.194585344 +0000 UTC m=+0.247038800 container init 6e99a14a2bad1497c3b8ae15ad85175314830833cf59cfcae65e469d694c7b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:15:18 compute-0 podman[201356]: 2025-11-26 01:15:18.218357417 +0000 UTC m=+0.270810873 container start 6e99a14a2bad1497c3b8ae15ad85175314830833cf59cfcae65e469d694c7b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:15:18 compute-0 bash[201356]: 6e99a14a2bad1497c3b8ae15ad85175314830833cf59cfcae65e469d694c7b65
Nov 26 01:15:18 compute-0 systemd[1]: Started Ceph crash.compute-0 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev 2bc3c95f-ed20-4640-8a84-279b8073da09 (Updating crash deployment (+1 -> 1))
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 2bc3c95f-ed20-4640-8a84-279b8073da09 (Updating crash deployment (+1 -> 1)) in 4 seconds
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4e78d3ab-e757-401d-a0d5-d03689a7cc0d does not exist
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev 41c0c30b-a21e-47e9-a72e-dce51067b525 (Updating mgr deployment (+1 -> 2))
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zqtivt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zqtivt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zqtivt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:18 compute-0 python3[201401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.zqtivt on compute-0
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.zqtivt on compute-0
Nov 26 01:15:18 compute-0 podman[201402]: 2025-11-26 01:15:18.480067415 +0000 UTC m=+0.065423868 container create ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 26 01:15:18 compute-0 systemd[1]: Started libpod-conmon-ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae.scope.
Nov 26 01:15:18 compute-0 podman[201402]: 2025-11-26 01:15:18.453274127 +0000 UTC m=+0.038630670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee02b04a50e0871a24682e2cb4dad1df594b131e755b4eb756bfe4b4b77914a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee02b04a50e0871a24682e2cb4dad1df594b131e755b4eb756bfe4b4b77914a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fee02b04a50e0871a24682e2cb4dad1df594b131e755b4eb756bfe4b4b77914a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:18 compute-0 podman[201402]: 2025-11-26 01:15:18.59091173 +0000 UTC m=+0.176268213 container init ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:15:18 compute-0 podman[201402]: 2025-11-26 01:15:18.599455969 +0000 UTC m=+0.184812432 container start ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:15:18 compute-0 podman[201402]: 2025-11-26 01:15:18.604605473 +0000 UTC m=+0.189961926 container attach ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:15:18 compute-0 podman[201442]: 2025-11-26 01:15:18.621233277 +0000 UTC m=+0.088868773 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:15:18 compute-0 podman[201439]: 2025-11-26 01:15:18.649807365 +0000 UTC m=+0.112883973 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.663+0000 7fd80604a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.663+0000 7fd80604a640 -1 AuthRegistry(0x7fd800066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.664+0000 7fd80604a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.664+0000 7fd80604a640 -1 AuthRegistry(0x7fd806049000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.670+0000 7fd7ff7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: 2025-11-26T01:15:18.670+0000 7fd80604a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 26 01:15:18 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-crash-compute-0[201372]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 26 01:15:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:19 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zqtivt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 01:15:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zqtivt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 26 01:15:19 compute-0 ceph-mon[192746]: Deploying daemon mgr.compute-0.zqtivt on compute-0
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.416095131 +0000 UTC m=+0.074649995 container create dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:19 compute-0 systemd[1]: Started libpod-conmon-dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c.scope.
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.386371381 +0000 UTC m=+0.044926325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.533025466 +0000 UTC m=+0.191580350 container init dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.542903772 +0000 UTC m=+0.201458636 container start dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.547178272 +0000 UTC m=+0.205733136 container attach dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:15:19 compute-0 zen_johnson[201700]: 167 167
Nov 26 01:15:19 compute-0 systemd[1]: libpod-dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c.scope: Deactivated successfully.
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.554264739 +0000 UTC m=+0.212819653 container died dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-08552b93b2fef57299ae0cab43d0c8ec01d1e39e3975ca62a4fc45f194077802-merged.mount: Deactivated successfully.
Nov 26 01:15:19 compute-0 podman[201655]: 2025-11-26 01:15:19.624744617 +0000 UTC m=+0.283299481 container remove dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_johnson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:19 compute-0 systemd[1]: libpod-conmon-dd7e66b004397393d606033284505434e146b6cd73d858139a3b91056173c38c.scope: Deactivated successfully.
Nov 26 01:15:19 compute-0 systemd[1]: Reloading.
Nov 26 01:15:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:20 compute-0 systemd[1]: Reloading.
Nov 26 01:15:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:20 compute-0 systemd[1]: Starting Ceph mgr.compute-0.zqtivt for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: [cephadm INFO root] Added host compute-0
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 26 01:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 26 01:15:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:20 compute-0 dazzling_hermann[201445]: Added host 'compute-0' with addr '192.168.122.100'
Nov 26 01:15:20 compute-0 dazzling_hermann[201445]: Scheduled mon update...
Nov 26 01:15:20 compute-0 dazzling_hermann[201445]: Scheduled mgr update...
Nov 26 01:15:20 compute-0 dazzling_hermann[201445]: Scheduled osd.default_drive_group update...
Nov 26 01:15:20 compute-0 systemd[1]: libpod-ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae.scope: Deactivated successfully.
Nov 26 01:15:20 compute-0 podman[201402]: 2025-11-26 01:15:20.743297771 +0000 UTC m=+2.328654244 container died ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-fee02b04a50e0871a24682e2cb4dad1df594b131e755b4eb756bfe4b4b77914a-merged.mount: Deactivated successfully.
Nov 26 01:15:20 compute-0 podman[201402]: 2025-11-26 01:15:20.835677691 +0000 UTC m=+2.421034174 container remove ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae (image=quay.io/ceph/ceph:v18, name=dazzling_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:20 compute-0 systemd[1]: libpod-conmon-ee47d09f10ca6bca7500754638b2ef268e5ecec4e30556715194e8effacb13ae.scope: Deactivated successfully.
Nov 26 01:15:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:21 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 1 completed events
Nov 26 01:15:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:15:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 podman[201918]: 2025-11-26 01:15:21.076211437 +0000 UTC m=+0.094833479 container create dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:15:21 compute-0 podman[201918]: 2025-11-26 01:15:21.03833596 +0000 UTC m=+0.056958092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e57b854c4b06c0c0a34360013167128ce6bb9e662c07ac7982a73cea6cd81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e57b854c4b06c0c0a34360013167128ce6bb9e662c07ac7982a73cea6cd81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e57b854c4b06c0c0a34360013167128ce6bb9e662c07ac7982a73cea6cd81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e57b854c4b06c0c0a34360013167128ce6bb9e662c07ac7982a73cea6cd81/merged/var/lib/ceph/mgr/ceph-compute-0.zqtivt supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 podman[201918]: 2025-11-26 01:15:21.174739409 +0000 UTC m=+0.193361471 container init dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:21 compute-0 podman[201918]: 2025-11-26 01:15:21.208062759 +0000 UTC m=+0.226684781 container start dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:15:21 compute-0 bash[201918]: dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e
Nov 26 01:15:21 compute-0 systemd[1]: Started Ceph mgr.compute-0.zqtivt for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: pidfile_write: ignore empty --pid-file
Nov 26 01:15:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:15:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev 41c0c30b-a21e-47e9-a72e-dce51067b525 (Updating mgr deployment (+1 -> 2))
Nov 26 01:15:21 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 41c0c30b-a21e-47e9-a72e-dce51067b525 (Updating mgr deployment (+1 -> 2)) in 3 seconds
Nov 26 01:15:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:15:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 python3[201961]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: mgr[py] Loading python module 'alerts'
Nov 26 01:15:21 compute-0 podman[202006]: 2025-11-26 01:15:21.526422179 +0000 UTC m=+0.103763849 container create cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:21 compute-0 podman[202006]: 2025-11-26 01:15:21.482448761 +0000 UTC m=+0.059790501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:21 compute-0 systemd[1]: Started libpod-conmon-cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1.scope.
Nov 26 01:15:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: Added host compute-0
Nov 26 01:15:21 compute-0 ceph-mon[192746]: Saving service mon spec with placement compute-0
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: Saving service mgr spec with placement compute-0
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 26 01:15:21 compute-0 ceph-mon[192746]: Saving service osd.default_drive_group spec with placement compute-0
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2808c66dd11643c6cf880ffdabf068e28b84ae8e40efbedfe17089462948a37/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2808c66dd11643c6cf880ffdabf068e28b84ae8e40efbedfe17089462948a37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2808c66dd11643c6cf880ffdabf068e28b84ae8e40efbedfe17089462948a37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:21 compute-0 podman[202006]: 2025-11-26 01:15:21.672771985 +0000 UTC m=+0.250113665 container init cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:15:21 compute-0 podman[202006]: 2025-11-26 01:15:21.69300279 +0000 UTC m=+0.270344480 container start cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:15:21 compute-0 podman[202006]: 2025-11-26 01:15:21.700250353 +0000 UTC m=+0.277592083 container attach cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: mgr[py] Loading python module 'balancer'
Nov 26 01:15:21 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt[201940]: 2025-11-26T01:15:21.715+0000 7f287dc6d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 01:15:21 compute-0 ceph-mgr[201962]: mgr[py] Loading python module 'cephadm'
Nov 26 01:15:21 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt[201940]: 2025-11-26T01:15:21.964+0000 7f287dc6d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 26 01:15:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 01:15:22 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143498141' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 01:15:22 compute-0 awesome_hellman[202051]: 
Nov 26 01:15:22 compute-0 awesome_hellman[202051]: {"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-26T01:13:49.405054+0000","services":{}},"progress_events":{"41c0c30b-a21e-47e9-a72e-dce51067b525":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 26 01:15:22 compute-0 systemd[1]: libpod-cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1.scope: Deactivated successfully.
Nov 26 01:15:22 compute-0 podman[202006]: 2025-11-26 01:15:22.345428477 +0000 UTC m=+0.922770157 container died cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2808c66dd11643c6cf880ffdabf068e28b84ae8e40efbedfe17089462948a37-merged.mount: Deactivated successfully.
Nov 26 01:15:22 compute-0 podman[202006]: 2025-11-26 01:15:22.412735267 +0000 UTC m=+0.990076907 container remove cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1 (image=quay.io/ceph/ceph:v18, name=awesome_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:22 compute-0 systemd[1]: libpod-conmon-cdb75859579a7a27597fcb6175ad930b9ecb3c869008b15e3f9b2fdf64ec9ca1.scope: Deactivated successfully.
Nov 26 01:15:22 compute-0 podman[202257]: 2025-11-26 01:15:22.88234942 +0000 UTC m=+0.115674151 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:23 compute-0 podman[202257]: 2025-11-26 01:15:23.00090244 +0000 UTC m=+0.234227181 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2009e4e5-45ae-435a-bb56-ea2c2979e318 does not exist
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 26 01:15:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev cbf60f78-fbb8-4b04-bb0d-54c4a28d8af3 (Updating mgr deployment (-1 -> 1))
Nov 26 01:15:23 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.zqtivt from compute-0 -- ports [8765]
Nov 26 01:15:23 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.zqtivt from compute-0 -- ports [8765]
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:23 compute-0 podman[202372]: 2025-11-26 01:15:23.947109362 +0000 UTC m=+0.164116774 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:15:23 compute-0 ceph-mgr[201962]: mgr[py] Loading python module 'crash'
Nov 26 01:15:24 compute-0 ceph-mgr[201962]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 01:15:24 compute-0 ceph-mgr[201962]: mgr[py] Loading python module 'dashboard'
Nov 26 01:15:24 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt[201940]: 2025-11-26T01:15:24.249+0000 7f287dc6d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 26 01:15:24 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.zqtivt for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:15:24 compute-0 ceph-mon[192746]: Removing daemon mgr.compute-0.zqtivt from compute-0 -- ports [8765]
Nov 26 01:15:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:25 compute-0 podman[202531]: 2025-11-26 01:15:25.012410659 +0000 UTC m=+0.097383711 container died dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:15:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00e57b854c4b06c0c0a34360013167128ce6bb9e662c07ac7982a73cea6cd81-merged.mount: Deactivated successfully.
Nov 26 01:15:25 compute-0 podman[202531]: 2025-11-26 01:15:25.089308956 +0000 UTC m=+0.174282018 container remove dac73629eae609d54a18532b9c1bd2ffac6af1f5596adacf92e94c2c2cf64b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:25 compute-0 bash[202531]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-zqtivt
Nov 26 01:15:25 compute-0 systemd[1]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mgr.compute-0.zqtivt.service: Main process exited, code=exited, status=143/n/a
Nov 26 01:15:25 compute-0 podman[202556]: 2025-11-26 01:15:25.219008058 +0000 UTC m=+0.133770317 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543)
Nov 26 01:15:25 compute-0 systemd[1]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mgr.compute-0.zqtivt.service: Failed with result 'exit-code'.
Nov 26 01:15:25 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.zqtivt for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:15:25 compute-0 systemd[1]: ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mgr.compute-0.zqtivt.service: Consumed 5.504s CPU time.
Nov 26 01:15:25 compute-0 systemd[1]: Reloading.
Nov 26 01:15:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:25 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.zqtivt
Nov 26 01:15:25 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.zqtivt
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.zqtivt"} v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.zqtivt"}]: dispatch
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.zqtivt"}]': finished
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:25 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev cbf60f78-fbb8-4b04-bb0d-54c4a28d8af3 (Updating mgr deployment (-1 -> 1))
Nov 26 01:15:25 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event cbf60f78-fbb8-4b04-bb0d-54c4a28d8af3 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:25 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1918e2b4-3219-4341-bb56-9fb717b19420 does not exist
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:26 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 3 completed events
Nov 26 01:15:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:15:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:26 compute-0 ceph-mon[192746]: Removing key for mgr.compute-0.zqtivt
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.zqtivt"}]: dispatch
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.zqtivt"}]': finished
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:15:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:26 compute-0 podman[202772]: 2025-11-26 01:15:26.838237661 +0000 UTC m=+0.072954278 container create ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:26 compute-0 podman[202772]: 2025-11-26 01:15:26.815057954 +0000 UTC m=+0.049774661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:26 compute-0 systemd[1]: Started libpod-conmon-ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e.scope.
Nov 26 01:15:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:26 compute-0 podman[202772]: 2025-11-26 01:15:26.994238217 +0000 UTC m=+0.228954934 container init ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:15:27 compute-0 podman[202772]: 2025-11-26 01:15:27.011801838 +0000 UTC m=+0.246518455 container start ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:15:27 compute-0 podman[202772]: 2025-11-26 01:15:27.017505317 +0000 UTC m=+0.252222034 container attach ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:27 compute-0 vigilant_tesla[202788]: 167 167
Nov 26 01:15:27 compute-0 systemd[1]: libpod-ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e.scope: Deactivated successfully.
Nov 26 01:15:27 compute-0 podman[202772]: 2025-11-26 01:15:27.039143991 +0000 UTC m=+0.273860648 container died ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c085ab6afea466740a893b57055407be5ce818610db921b43fd73b38d06f61-merged.mount: Deactivated successfully.
Nov 26 01:15:27 compute-0 podman[202772]: 2025-11-26 01:15:27.14369171 +0000 UTC m=+0.378408357 container remove ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:15:27 compute-0 systemd[1]: libpod-conmon-ad51d84bf3fa813e64bd7b75ef82388cba23b508f132fe2e46e4d81380559f8e.scope: Deactivated successfully.
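
The short-lived vigilant_tesla container exists only to print "167 167", the uid:gid of the ceph user inside the image; cephadm probes this before chowning host directories so files end up owned by the in-container user. A rough by-hand equivalent (that the probe stats /var/lib/ceph is an assumption inferred from the "167 167" output; the image digest is taken from the log):

    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph   # expected output: 167 167
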
Nov 26 01:15:27 compute-0 podman[202810]: 2025-11-26 01:15:27.384534296 +0000 UTC m=+0.073134984 container create a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:15:27 compute-0 podman[202810]: 2025-11-26 01:15:27.36320518 +0000 UTC m=+0.051805908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:27 compute-0 systemd[1]: Started libpod-conmon-a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c.scope.
Nov 26 01:15:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
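
The "supports timestamps until 2038" kernel messages mean the xfs filesystem backing these bind mounts was created without the bigtime feature, so inode timestamps cap at 0x7fffffff (January 2038); harmless for this deployment, but worth noting on long-lived hosts. The feature flag can be checked with (path assumed from the mount points in the log):

    xfs_info /var/lib/containers | grep bigtime   # bigtime=1 lifts the 2038 limit
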
Nov 26 01:15:27 compute-0 podman[202810]: 2025-11-26 01:15:27.529494203 +0000 UTC m=+0.218094961 container init a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:27 compute-0 podman[202810]: 2025-11-26 01:15:27.549715738 +0000 UTC m=+0.238316426 container start a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:15:27 compute-0 podman[202810]: 2025-11-26 01:15:27.555303694 +0000 UTC m=+0.243904472 container attach a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:15:28 compute-0 nifty_hoover[202825]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:15:28 compute-0 nifty_hoover[202825]: --> relative data size: 1.0
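
The nifty_hoover container is running ceph-volume in batch mode: "0 physical, 3 LVM" means the drive group resolved to three pre-built logical volumes rather than raw disks, and "relative data size: 1.0" means each OSD gets its whole LV. A plausible reconstruction of the invocation (the exact command line is not in the log; passing LV names to lvm batch is an assumption consistent with the output that follows):

    ceph-volume lvm batch \
        ceph_vg0/ceph_lv0 ceph_vg1/ceph_lv1 ceph_vg2/ceph_lv2
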
Nov 26 01:15:28 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:28 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 835781ef-644a-4834-abb3-029e5bcba0ff
Nov 26 01:15:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "835781ef-644a-4834-abb3-029e5bcba0ff"} v 0) v1
Nov 26 01:15:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3384010160' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "835781ef-644a-4834-abb3-029e5bcba0ff"}]: dispatch
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3384010160' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "835781ef-644a-4834-abb3-029e5bcba0ff"}]': finished
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 26 01:15:29 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:29 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
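
The "failed to return metadata for osd.0" error is expected at this point: `osd new` has registered the ID in the osdmap, but the daemon has never booted, so the mon has no metadata to hand back to the mgr. Once the OSD starts, the same query succeeds:

    ceph osd metadata 0   # JSON once osd.0 has booted; ENOENT before that
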
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:29 compute-0 lvm[202887]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 01:15:29 compute-0 lvm[202887]: VG ceph_vg0 finished
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:29 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 26 01:15:29 compute-0 podman[158021]: time="2025-11-26T01:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:15:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25437 "" "Go-http-client/1.1"
Nov 26 01:15:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Nov 26 01:15:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 01:15:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918689319' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 01:15:30 compute-0 nifty_hoover[202825]: stderr: got monmap epoch 1
Nov 26 01:15:30 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3384010160' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "835781ef-644a-4834-abb3-029e5bcba0ff"}]: dispatch
Nov 26 01:15:30 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3384010160' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "835781ef-644a-4834-abb3-029e5bcba0ff"}]': finished
Nov 26 01:15:30 compute-0 nifty_hoover[202825]: --> Creating keyring file for osd.0
Nov 26 01:15:30 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 26 01:15:30 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 26 01:15:30 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 835781ef-644a-4834-abb3-029e5bcba0ff --setuser ceph --setgroup ceph
Nov 26 01:15:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 01:15:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 01:15:31 compute-0 ceph-mon[192746]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 26 01:15:31 compute-0 ceph-mon[192746]: Cluster is now healthy
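
TOO_FEW_OSDS clears after the very first `osd new` because, as the cleared message itself states, this deployment runs with osd_pool_default_size 1; with the usual default of 3 the warning would persist until three OSDs were in. The effective value can be confirmed with:

    ceph config get mon osd_pool_default_size
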
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: ERROR   01:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:15:31 compute-0 openstack_network_exporter[160178]: 
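
The openstack_network_exporter errors are a separate, recurring issue: the exporter polls ovn-northd and ovsdb-server over their control sockets, and neither daemon runs on this compute node, so every scrape logs these failures. A quick way to confirm the sockets are simply absent (these paths are the usual defaults and an assumption for this host):

    ls /var/run/ovn/ /var/run/openvswitch/   # look for *.ctl control sockets
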
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:30.132+0000 7f1fcc297740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:30.133+0000 7f1fcc297740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:30.133+0000 7f1fcc297740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:30.133+0000 7f1fcc297740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
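
The four stderr lines above look alarming but are routine for `--mkfs`: ceph-osd probes the fresh block device for an existing bluestore label and fsid before it has written one, so the decode fails by design. After prepare completes, the label is present and readable:

    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
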
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 26 01:15:32 compute-0 nifty_hoover[202825]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
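
That completes the full prepare+activate cycle for osd.0: tmpfs mounted at /var/lib/ceph/osd/ceph-0, the block symlink pointed at the LV, prime-osd-dir populating the directory from the bluestore label, and ownership fixed up. The activation half can be re-run by hand using the OSD id and fsid seen in the log:

    ceph-volume lvm activate 0 835781ef-644a-4834-abb3-029e5bcba0ff
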
Nov 26 01:15:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a345f9b0-19f1-464f-95c4-9c68bb202f1e
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"} v 0) v1
Nov 26 01:15:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2459988974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"}]: dispatch
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2459988974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"}]': finished
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 26 01:15:33 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:33 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:33 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:33 compute-0 lvm[203844]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 01:15:33 compute-0 lvm[203844]: VG ceph_vg1 finished
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:33 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 26 01:15:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 01:15:34 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578201599' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 01:15:34 compute-0 nifty_hoover[202825]: stderr: got monmap epoch 1
Nov 26 01:15:34 compute-0 nifty_hoover[202825]: --> Creating keyring file for osd.1
Nov 26 01:15:34 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 26 01:15:34 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 26 01:15:34 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid a345f9b0-19f1-464f-95c4-9c68bb202f1e --setuser ceph --setgroup ceph
Nov 26 01:15:34 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2459988974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"}]: dispatch
Nov 26 01:15:34 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2459988974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"}]': finished
Nov 26 01:15:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:36 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:34.335+0000 7f62ed7b1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:36 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:34.336+0000 7f62ed7b1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:36 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:34.336+0000 7f62ed7b1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:36 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:34.337+0000 7f62ed7b1740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 26 01:15:36 compute-0 nifty_hoover[202825]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 26 01:15:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
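
osd.1 follows the identical sequence against ceph_vg1/ceph_lv1, including the same benign _read_bdev_label noise during mkfs; osd.2 repeats it next on ceph_vg2/ceph_lv2. The registered IDs accumulate in the osdmap as each `osd new` lands:

    ceph osd ls   # prints 0, 1, 2 once all three are registered
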
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8f697525-afad-4f38-820d-80587338cf3b
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8f697525-afad-4f38-820d-80587338cf3b"} v 0) v1
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3017110498' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8f697525-afad-4f38-820d-80587338cf3b"}]: dispatch
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3017110498' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8f697525-afad-4f38-820d-80587338cf3b"}]': finished
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:37 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:37 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:15:37 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:15:37 compute-0 lvm[204802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 01:15:37 compute-0 lvm[204802]: VG ceph_vg2 finished
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 01:15:37 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 26 01:15:38 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3017110498' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8f697525-afad-4f38-820d-80587338cf3b"}]: dispatch
Nov 26 01:15:38 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3017110498' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8f697525-afad-4f38-820d-80587338cf3b"}]': finished
Nov 26 01:15:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 26 01:15:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580331485' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 26 01:15:38 compute-0 nifty_hoover[202825]: stderr: got monmap epoch 1
Nov 26 01:15:38 compute-0 nifty_hoover[202825]: --> Creating keyring file for osd.2
Nov 26 01:15:38 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 26 01:15:38 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 26 01:15:38 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 8f697525-afad-4f38-820d-80587338cf3b --setuser ceph --setgroup ceph
Nov 26 01:15:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:15:40
Nov 26 01:15:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:15:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:15:40 compute-0 ceph-mgr[193049]: [balancer INFO root] No pools available
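
The balancer tick shows the module active in upmap mode with a 5% misplaced ceiling, but with no pools created yet there is nothing to optimize, hence "No pools available". Its state is queryable at any time:

    ceph balancer status
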
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:38.552+0000 7f5e51044740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:38.552+0000 7f5e51044740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:38.553+0000 7f5e51044740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: stderr: 2025-11-26T01:15:38.553+0000 7f5e51044740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 26 01:15:41 compute-0 nifty_hoover[202825]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
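
With the third create finished, all three OSDs are prepared and activated and nifty_hoover exits (the scope teardown follows). Note the osdmap still reads "3 total, 0 up, 3 in": the IDs exist and are in, but no ceph-osd daemon has booted yet. The usual check:

    ceph osd tree   # expect three OSDs, down until their systemd units start
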
Nov 26 01:15:41 compute-0 systemd[1]: libpod-a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c.scope: Deactivated successfully.
Nov 26 01:15:41 compute-0 systemd[1]: libpod-a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c.scope: Consumed 8.254s CPU time.
Nov 26 01:15:41 compute-0 podman[202810]: 2025-11-26 01:15:41.625606634 +0000 UTC m=+14.314207372 container died a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef45a035750fb0aae978edcd63fdffb26ff4e23afbddae69ba81e00a1db4432f-merged.mount: Deactivated successfully.
Nov 26 01:15:41 compute-0 podman[202810]: 2025-11-26 01:15:41.730665468 +0000 UTC m=+14.419266186 container remove a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:41 compute-0 systemd[1]: libpod-conmon-a1f454b6f9a8806bd6cbb15a2388c7bbbec696ccda4961dd6c2b199efecab43c.scope: Deactivated successfully.
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.784007861 +0000 UTC m=+0.064098361 container create d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:42 compute-0 systemd[1]: Started libpod-conmon-d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269.scope.
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.756577265 +0000 UTC m=+0.036667775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.918010093 +0000 UTC m=+0.198100563 container init d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.926226232 +0000 UTC m=+0.206316712 container start d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.930208223 +0000 UTC m=+0.210298773 container attach d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:15:42 compute-0 condescending_zhukovsky[205893]: 167 167
Nov 26 01:15:42 compute-0 systemd[1]: libpod-d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269.scope: Deactivated successfully.
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.934802501 +0000 UTC m=+0.214892971 container died d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:15:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-61bb6076455674e07bae1b5a1ab6533ddabf64b6ce3f6eb51be7feaa5324624b-merged.mount: Deactivated successfully.
Nov 26 01:15:42 compute-0 podman[205878]: 2025-11-26 01:15:42.985430065 +0000 UTC m=+0.265520535 container remove d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:15:43 compute-0 systemd[1]: libpod-conmon-d84327478b1cec1f3376f9989bd8924cf1fd5674fa7e90953d360a9227324269.scope: Deactivated successfully.
Nov 26 01:15:43 compute-0 podman[205916]: 2025-11-26 01:15:43.250910278 +0000 UTC m=+0.071874138 container create dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:43 compute-0 podman[205916]: 2025-11-26 01:15:43.222267779 +0000 UTC m=+0.043231659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:43 compute-0 systemd[1]: Started libpod-conmon-dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934.scope.
Nov 26 01:15:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a24a61ff2e1515b2f20f4da65a172ca335cd0e30fd979aa49e2b0f18bbc6f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a24a61ff2e1515b2f20f4da65a172ca335cd0e30fd979aa49e2b0f18bbc6f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a24a61ff2e1515b2f20f4da65a172ca335cd0e30fd979aa49e2b0f18bbc6f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a24a61ff2e1515b2f20f4da65a172ca335cd0e30fd979aa49e2b0f18bbc6f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:43 compute-0 podman[205916]: 2025-11-26 01:15:43.406983326 +0000 UTC m=+0.227947236 container init dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:43 compute-0 podman[205916]: 2025-11-26 01:15:43.441655465 +0000 UTC m=+0.262619325 container start dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:15:43 compute-0 podman[205916]: 2025-11-26 01:15:43.448132125 +0000 UTC m=+0.269096035 container attach dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:15:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
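
Everything cranky_gates prints from here on is one JSON document: the output of `ceph-volume lvm list --format json`, which cephadm parses to map OSD ids to their LVs and ceph.* LV tags (cluster_fsid, osd_fsid, osd_id, osdspec_affinity, and so on). The same inventory can be dumped on the host; invoking it through the cephadm wrapper is one option (that syntax is an assumption):

    cephadm ceph-volume -- lvm list --format json
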
Nov 26 01:15:44 compute-0 cranky_gates[205930]: {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    "0": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "devices": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "/dev/loop3"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            ],
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_name": "ceph_lv0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_size": "21470642176",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "name": "ceph_lv0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "tags": {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_name": "ceph",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.crush_device_class": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.encrypted": "0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_id": "0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.vdo": "0"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            },
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "vg_name": "ceph_vg0"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        }
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    ],
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    "1": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "devices": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "/dev/loop4"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            ],
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_name": "ceph_lv1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_size": "21470642176",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "name": "ceph_lv1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "tags": {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_name": "ceph",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.crush_device_class": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.encrypted": "0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_id": "1",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.vdo": "0"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            },
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "vg_name": "ceph_vg1"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        }
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    ],
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    "2": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "devices": [
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "/dev/loop5"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            ],
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_name": "ceph_lv2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_size": "21470642176",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "name": "ceph_lv2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "tags": {
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.cluster_name": "ceph",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.crush_device_class": "",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.encrypted": "0",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osd_id": "2",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:                "ceph.vdo": "0"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            },
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "type": "block",
Nov 26 01:15:44 compute-0 cranky_gates[205930]:            "vg_name": "ceph_vg2"
Nov 26 01:15:44 compute-0 cranky_gates[205930]:        }
Nov 26 01:15:44 compute-0 cranky_gates[205930]:    ]
Nov 26 01:15:44 compute-0 cranky_gates[205930]: }
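That closes the JSON report, which is what `ceph-volume lvm list --format json` emits: a map keyed by OSD id, each entry carrying the backing devices, the LV path, and the `ceph.*` tags both as the flat `lv_tags` string and as the parsed `tags` object. A minimal sketch for pulling the osd-id-to-device mapping back out of such a report, assuming it was saved to lvm-list.json (a hypothetical filename):

    # capture the inventory (inside a cephadm shell, or wherever ceph-volume runs)
    ceph-volume lvm list --format json > lvm-list.json

    # print: <osd id> <backing device(s)> <lv path>
    jq -r 'to_entries[] | "\(.key) \(.value[0].devices | join(",")) \(.value[0].lv_path)"' lvm-list.json

For this host the line for OSD 1 would read `1 /dev/loop4 /dev/ceph_vg1/ceph_lv1`, matching the entry above.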
Nov 26 01:15:44 compute-0 systemd[1]: libpod-dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934.scope: Deactivated successfully.
Nov 26 01:15:44 compute-0 podman[205916]: 2025-11-26 01:15:44.252603658 +0000 UTC m=+1.073567488 container died dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a24a61ff2e1515b2f20f4da65a172ca335cd0e30fd979aa49e2b0f18bbc6f5-merged.mount: Deactivated successfully.
Nov 26 01:15:44 compute-0 podman[205916]: 2025-11-26 01:15:44.343061454 +0000 UTC m=+1.164025294 container remove dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gates, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:44 compute-0 systemd[1]: libpod-conmon-dc4fec5814eec863714cd189a5350aacda444fb3a785c779926cca3b35d17934.scope: Deactivated successfully.
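The create/init/start/attach/died/remove sequence ending here is podman's full lifecycle for a short-lived helper container (podman assigns the random name, cranky_gates, when the caller sets none). The same event stream can be followed live; a sketch, assuming podman's default journald events backend:

    # lifecycle events for one container, selected by name
    podman events --filter container=cranky_gates

    # or replay everything podman logged in the last five minutes
    podman events --since 5m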
Nov 26 01:15:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 26 01:15:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 01:15:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:44 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:44 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 26 01:15:44 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
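The audit lines above show the only two things the mgr fetches from the mon before deploying an OSD: the daemon keyring and a minimal ceph.conf. Both are ordinary CLI commands and can be reproduced directly:

    ceph auth get osd.0                  # the keyring cephadm installs for the daemon
    ceph config generate-minimal-conf    # a minimal ceph.conf pointing at the mon(s)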
Nov 26 01:15:44 compute-0 podman[205977]: 2025-11-26 01:15:44.694809236 +0000 UTC m=+0.121540275 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:15:44 compute-0 podman[205976]: 2025-11-26 01:15:44.710408352 +0000 UTC m=+0.138303173 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 01:15:44 compute-0 podman[205978]: 2025-11-26 01:15:44.732653713 +0000 UTC m=+0.151688107 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
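The three health_status=healthy events above are podman's periodic healthchecks for the edpm-managed containers; each config_data embeds a healthcheck test script bind-mounted at /openstack/healthcheck. The same check can be triggered by hand, using a container name taken from the log:

    # exits 0 when the container's configured healthcheck passes
    podman healthcheck run ceilometer_agent_compute && echo healthy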
Nov 26 01:15:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.474524059 +0000 UTC m=+0.087602567 container create 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.434130471 +0000 UTC m=+0.047209029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:45 compute-0 systemd[1]: Started libpod-conmon-760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5.scope.
Nov 26 01:15:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.608725527 +0000 UTC m=+0.221804105 container init 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.626420371 +0000 UTC m=+0.239498889 container start 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.63317882 +0000 UTC m=+0.246257378 container attach 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:15:45 compute-0 reverent_meitner[206168]: 167 167
Nov 26 01:15:45 compute-0 systemd[1]: libpod-760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5.scope: Deactivated successfully.
Nov 26 01:15:45 compute-0 conmon[206168]: conmon 760572aea9eccbd8da51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5.scope/container/memory.events
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.639512037 +0000 UTC m=+0.252590555 container died 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c73d48fd4ed8ba8d69ebab2486f7d7ee3ce3db20692fbcd09cbeab228a93a5-merged.mount: Deactivated successfully.
Nov 26 01:15:45 compute-0 podman[206153]: 2025-11-26 01:15:45.712916506 +0000 UTC m=+0.325995014 container remove 760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:15:45 compute-0 systemd[1]: libpod-conmon-760572aea9eccbd8da51be0f09caf029caa06a8dde78fc2ec27adbad3efd0ef5.scope: Deactivated successfully.
Nov 26 01:15:46 compute-0 podman[206200]: 2025-11-26 01:15:46.117060372 +0000 UTC m=+0.066891869 container create 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:15:46 compute-0 ceph-mon[192746]: Deploying daemon osd.0 on compute-0
Nov 26 01:15:46 compute-0 systemd[1]: Started libpod-conmon-62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0.scope.
Nov 26 01:15:46 compute-0 podman[206200]: 2025-11-26 01:15:46.092043183 +0000 UTC m=+0.041874750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
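These xfs timestamp lines (they recur below for every bind mount the OSD containers pull in) are informational: the filesystem was created without the bigtime feature, so inode timestamps cap at 2038. A quick check, hedged since the exact xfs_info output layout varies by xfsprogs version:

    # bigtime=1 means 64-bit timestamps; bigtime=0 matches the kernel message above
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'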
Nov 26 01:15:46 compute-0 podman[206200]: 2025-11-26 01:15:46.306619885 +0000 UTC m=+0.256451412 container init 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:46 compute-0 podman[206200]: 2025-11-26 01:15:46.331064218 +0000 UTC m=+0.280895725 container start 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:15:46 compute-0 podman[206200]: 2025-11-26 01:15:46.336957113 +0000 UTC m=+0.286788640 container attach 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:46 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test[206217]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 01:15:46 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test[206217]:                            [--no-systemd] [--no-tmpfs]
Nov 26 01:15:46 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test[206217]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 01:15:46 compute-0 systemd[1]: libpod-62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0.scope: Deactivated successfully.
Nov 26 01:15:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:47 compute-0 podman[206222]: 2025-11-26 01:15:47.028553666 +0000 UTC m=+0.059424221 container died 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c15402feb11495f5eeb929aab301a0c3f380323d2c0f32bdb821fb9803c46d-merged.mount: Deactivated successfully.
Nov 26 01:15:47 compute-0 podman[206222]: 2025-11-26 01:15:47.143407203 +0000 UTC m=+0.174277758 container remove 62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:47 compute-0 systemd[1]: libpod-conmon-62a8d16860dcbef8781f43a7bb2e3f2fac087cbcb67cac2cef4315065d72dfd0.scope: Deactivated successfully.
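The osd-0-activate-test container above is passed a deliberately invalid flag: the `unrecognized arguments: --bad-option` usage error proves the image's ceph-volume has a plain `activate` subcommand at all (argparse only reaches the unknown-flag complaint once the subcommand parses), and the -activate-test name suggests cephadm uses this as a capability probe before the real activation below. That reading is inferred from the log, not quoted from cephadm source. The equivalent probe by hand, against the same image digest:

    podman run --rm --entrypoint ceph-volume \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        activate --bad-option 2>&1 | grep -q 'unrecognized arguments' \
        && echo "plain 'activate' subcommand present"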
Nov 26 01:15:47 compute-0 systemd[1]: Reloading.
Nov 26 01:15:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:48 compute-0 systemd[1]: Reloading.
Nov 26 01:15:48 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:48 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
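Both generator messages repeat on every daemon-reload (cephadm reloads twice here while installing the osd.0 unit) and are unrelated to the deployment itself. If the noise matters, the usual remedies are sketched below, under the assumption that rc.local is actually meant to run and the legacy network initscript is unused:

    # the rc-local generator only wraps rc.local once it is executable
    # (note this also enables it at boot)
    chmod +x /etc/rc.d/rc.local

    # the sysv-generator warning persists as long as /etc/rc.d/init.d/network
    # exists; it goes away once the legacy initscript is removed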
Nov 26 01:15:48 compute-0 systemd[1]: Starting Ceph osd.0 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:15:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:49 compute-0 podman[206376]: 2025-11-26 01:15:49.048936644 +0000 UTC m=+0.115705792 container create 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:15:49 compute-0 podman[206376]: 2025-11-26 01:15:49.012162887 +0000 UTC m=+0.078932075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:49 compute-0 podman[206388]: 2025-11-26 01:15:49.241750288 +0000 UTC m=+0.107723239 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:15:49 compute-0 podman[206387]: 2025-11-26 01:15:49.248289051 +0000 UTC m=+0.122889953 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:15:49 compute-0 podman[206376]: 2025-11-26 01:15:49.31916323 +0000 UTC m=+0.385932418 container init 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:49 compute-0 podman[206376]: 2025-11-26 01:15:49.329817428 +0000 UTC m=+0.396586546 container start 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:15:49 compute-0 podman[206376]: 2025-11-26 01:15:49.395929984 +0000 UTC m=+0.462699122 container attach 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:50 compute-0 bash[206376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 26 01:15:50 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate[206408]: --> ceph-volume raw activate successful for osd ID: 0
Nov 26 01:15:50 compute-0 bash[206376]: --> ceph-volume raw activate successful for osd ID: 0
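The raw activate above is just the five echoed commands: chown the OSD dir, prime it from the BlueStore superblock with ceph-bluestore-tool, chown the dm node, symlink it to block, and chown again. A post-activation spot check, assuming the cephadm data-dir layout /var/lib/ceph/<fsid>/osd.0 that is bind-mounted to the container's /var/lib/ceph/osd/ceph-0:

    # the block symlink should point at the activated LV
    ls -l /var/lib/ceph/36901f64-240e-5c29-a2e2-29b56f2c329c/osd.0/block

    # read the BlueStore label straight off the device
    cephadm shell -- ceph-bluestore-tool show-label --dev /dev/mapper/ceph_vg0-ceph_lv0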
Nov 26 01:15:50 compute-0 systemd[1]: libpod-86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c.scope: Deactivated successfully.
Nov 26 01:15:50 compute-0 systemd[1]: libpod-86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c.scope: Consumed 1.354s CPU time.
Nov 26 01:15:50 compute-0 podman[206573]: 2025-11-26 01:15:50.729762511 +0000 UTC m=+0.040235544 container died 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-83164dcd20e8c75eaac7e4fe140305b90da9bf9450f5da5cb24a576e1d2ac671-merged.mount: Deactivated successfully.
Nov 26 01:15:50 compute-0 podman[206573]: 2025-11-26 01:15:50.822224493 +0000 UTC m=+0.132697536 container remove 86c54fe3fd4ac9c3f7a313bf6954c55ffdea4654fd1fe5b913208c03b43c002c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:15:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:51 compute-0 podman[206626]: 2025-11-26 01:15:51.254522374 +0000 UTC m=+0.080622422 container create fd4f624ba4ccb0ddbb0265222a83ee766b3069801a0d6aee586a0b6187c4d1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:51 compute-0 podman[206626]: 2025-11-26 01:15:51.225791482 +0000 UTC m=+0.051891600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0fa9509d90a97a950cf91c8bc0744c4ff2541a266ee8c736f4f74eb66f3a51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0fa9509d90a97a950cf91c8bc0744c4ff2541a266ee8c736f4f74eb66f3a51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0fa9509d90a97a950cf91c8bc0744c4ff2541a266ee8c736f4f74eb66f3a51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0fa9509d90a97a950cf91c8bc0744c4ff2541a266ee8c736f4f74eb66f3a51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0fa9509d90a97a950cf91c8bc0744c4ff2541a266ee8c736f4f74eb66f3a51/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:51 compute-0 podman[206626]: 2025-11-26 01:15:51.362581862 +0000 UTC m=+0.188681930 container init fd4f624ba4ccb0ddbb0265222a83ee766b3069801a0d6aee586a0b6187c4d1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:15:51 compute-0 podman[206626]: 2025-11-26 01:15:51.395574983 +0000 UTC m=+0.221675031 container start fd4f624ba4ccb0ddbb0265222a83ee766b3069801a0d6aee586a0b6187c4d1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:51 compute-0 bash[206626]: fd4f624ba4ccb0ddbb0265222a83ee766b3069801a0d6aee586a0b6187c4d1d1
Nov 26 01:15:51 compute-0 systemd[1]: Started Ceph osd.0 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
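osd.0 now runs under cephadm's systemd template, ceph-<fsid>@<daemon>. Routine checks, with the fsid from this log:

    systemctl status 'ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@osd.0.service'
    ceph osd tree    # shows osd.0 once it registers with the mons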
Nov 26 01:15:51 compute-0 ceph-osd[206645]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:15:51 compute-0 ceph-osd[206645]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 01:15:51 compute-0 ceph-osd[206645]: pidfile_write: ignore empty --pid-file
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132df5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132df5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132df5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132df5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
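The size figure repeats on every bdev open and is worth decoding once: 0x4ffc00000 is 21470642176 bytes, which is 4 MiB short of an exact 20 GiB, so the "20 GiB" in the message is rounded (the LV is likely a whole number of 4 MiB extents carved from a backing device just under 20 GiB). Checking the arithmetic in the shell:

    printf '%d\n' 0x4ffc00000               # -> 21470642176
    echo $(( 20 * 1024**3 - 21470642176 ))  # -> 4194304 bytes, i.e. 4 MiB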
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a133c2d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a133c2d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a133c2d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a133c2d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a133c2d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 01:15:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 26 01:15:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 01:15:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:51 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132df5800 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 01:15:51 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 26 01:15:51 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 26 01:15:51 compute-0 ceph-osd[206645]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 26 01:15:51 compute-0 ceph-osd[206645]: load: jerasure load: lrc 
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:51 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 01:15:52 compute-0 ceph-osd[206645]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
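The two mClock figures are consistent with reef's rotational-media defaults, assuming osd_mclock_max_sequential_bandwidth_hdd = 150 MiB/s and osd_mclock_max_capacity_iops_hdd = 315 (assumed upstream defaults, not printed in this log): 150 MiB/s is exactly 157286400 bytes/s, and dividing by the IOPS capacity gives the per-IO cost.

    # bytes/s for 150 MiB/s, then bytes per IO at 315 IOPS
    awk 'BEGIN { bw = 150 * 1024 * 1024; printf "%d %.2f\n", bw, bw / 315 }'
    # -> 157286400 499321.90, matching the line above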
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs mount
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs mount shared_bdev_used = 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Git sha 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DB SUMMARY
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DB Session ID:  D7MYUU0SMOLSTMZV9AH1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                     Options.env: 0x55a133c7fe30
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                Options.info_log: 0x55a132e81680
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.write_buffer_manager: 0x55a132eae460
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.row_cache: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.wal_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.wal_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Compression algorithms supported:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kZSTD supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kXpressCompression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kBZip2Compression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kLZ4Compression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kZlibCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kLZ4HCCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         kSnappyCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_top_level_index_and_filter: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_shortening: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   checksum: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   no_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache: 0x55a132e68dd0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_name: BinnedLRUCache
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_options:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     capacity : 483183820
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     num_shard_bits : 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     strict_capacity_limit : 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     high_pri_pool_ratio: 0.000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_compressed: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   persistent_cache: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size_deviation: 10
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_restart_interval: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_block_restart_interval: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   metadata_block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   partition_filters: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   use_delta_encoding: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   filter_policy: bloomfilter
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   whole_key_filtering: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   verify_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   read_amp_bytes_per_bit: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   format_version: 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   enable_index_compression: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_align: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   max_auto_readahead_size: 262144
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   prepopulate_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   initial_auto_readahead_size: 8192
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_top_level_index_and_filter: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_shortening: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   checksum: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   no_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache: 0x55a132e68dd0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_name: BinnedLRUCache
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_options:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     capacity : 483183820
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     num_shard_bits : 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     strict_capacity_limit : 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     high_pri_pool_ratio: 0.000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_compressed: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   persistent_cache: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size_deviation: 10
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_restart_interval: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_block_restart_interval: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   metadata_block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   partition_filters: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   use_delta_encoding: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   filter_policy: bloomfilter
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   whole_key_filtering: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   verify_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   read_amp_bytes_per_bit: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   format_version: 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   enable_index_compression: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_align: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   max_auto_readahead_size: 262144
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   prepopulate_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   initial_auto_readahead_size: 8192
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_top_level_index_and_filter: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_shortening: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   checksum: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   no_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache: 0x55a132e68dd0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_name: BinnedLRUCache
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_options:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     capacity : 483183820
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     num_shard_bits : 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     strict_capacity_limit : 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     high_pri_pool_ratio: 0.000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_compressed: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   persistent_cache: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size_deviation: 10
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_restart_interval: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_block_restart_interval: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   metadata_block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   partition_filters: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   use_delta_encoding: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   filter_policy: bloomfilter
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   whole_key_filtering: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   verify_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   read_amp_bytes_per_bit: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   format_version: 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   enable_index_compression: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_align: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   max_auto_readahead_size: 262144
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   prepopulate_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   initial_auto_readahead_size: 8192
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   pin_top_level_index_and_filter: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_index_type: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_shortening: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   checksum: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   no_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache: 0x55a132e68dd0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_name: BinnedLRUCache
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_options:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     capacity : 483183820
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     num_shard_bits : 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     strict_capacity_limit : 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     high_pri_pool_ratio: 0.000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_cache_compressed: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   persistent_cache: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_size_deviation: 10
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_restart_interval: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   index_block_restart_interval: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   metadata_block_size: 4096
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   partition_filters: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   use_delta_encoding: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   filter_policy: bloomfilter
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   whole_key_filtering: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   verify_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   read_amp_bytes_per_bit: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   format_version: 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   enable_index_compression: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   block_align: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   max_auto_readahead_size: 262144
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   prepopulate_block_cache: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   initial_auto_readahead_size: 8192
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
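
Each column family dump above and below is self-contained; the OSD prints the same Options block once per family (p-0 through O-2 follow). A minimal sketch for pulling the key/value pairs out of a capture like this one, assuming the lines sit in a local file named osd.log (the file name and regex are illustrative assumptions, nothing Ceph or RocksDB ships):

    import re
    from typing import Dict

    # Matches the "Options.<name>: <value>" lines in the dump above.
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([A-Za-z0-9_.\[\]]+): (.*)$")

    def parse_options(path: str) -> Dict[str, str]:
        """Collect option key/value pairs from a journald capture.
        Later column families overwrite earlier ones, which is harmless
        here because every family logs identical Options.* values."""
        opts: Dict[str, str] = {}
        with open(path) as fh:
            for line in fh:
                m = OPT_RE.search(line)
                if m:
                    opts[m.group(1)] = m.group(2).strip()
        return opts

    if __name__ == "__main__":
        opts = parse_options("osd.log")
        print(opts.get("write_buffer_size"))  # expect "16777216"
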
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
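
Some quick arithmetic on the values just logged for [p-0]. With level_compaction_dynamic_level_bytes at 0, static sizing applies: each level's target is max_bytes_for_level_base times the 8.0 multiplier per level (the addtl multipliers are all 1), and a memtable flush writes roughly write_buffer_size times min_write_buffer_number_to_merge. A back-of-the-envelope check in Python:

    # Values copied from the [p-0] dump above.
    base = 1073741824           # max_bytes_for_level_base (1 GiB)
    mult = 8.0                  # max_bytes_for_level_multiplier
    num_levels = 7

    for level in range(1, num_levels):
        cap = base * mult ** (level - 1)
        print(f"L{level}: {cap / 2**30:.0f} GiB")   # L1: 1 ... L6: 32768

    wbs = 16777216              # write_buffer_size (16 MiB)
    merge = 6                   # min_write_buffer_number_to_merge
    maxn = 64                   # max_write_buffer_number
    print(f"flush ~{wbs * merge / 2**20:.0f} MiB, "
          f"worst-case memtable RAM {wbs * maxn / 2**30:.0f} GiB")
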
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
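
The three level-0 thresholds logged above (trigger 8, slowdown 20, stop 36) drive RocksDB's write-stall ladder: enough L0 files first schedule a compaction, then delay writes, then stop them. A toy model of that escalation, an illustration rather than RocksDB's actual code:

    TRIGGER, SLOWDOWN, STOP = 8, 20, 36

    def l0_state(l0_files: int) -> str:
        if l0_files >= STOP:
            return "writes stopped"
        if l0_files >= SLOWDOWN:
            return "writes delayed"
        if l0_files >= TRIGGER:
            return "L0 compaction pending"
        return "normal"

    for n in (4, 8, 20, 36):
        print(n, l0_state(n))
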
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81ce0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
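
Worth noting before the O-* dumps start: the p-* families all share one BinnedLRUCache (block_cache 0x55a132e68dd0, capacity 483183820 bytes), while the O-* families below share a second, larger one (0x55a132e68430, 536870912 bytes). With num_shard_bits 4, each cache splits into 16 shards; the per-shard arithmetic:

    # Capacities copied from the table_factory dumps in this log.
    caches = {"p-* cache": 483_183_820, "O-* cache": 536_870_912}
    SHARD_BITS = 4

    for name, cap in caches.items():
        shards = 2 ** SHARD_BITS
        print(f"{name}: {cap / 2**20:.1f} MiB total, "
              f"{shards} shards of {cap / shards / 2**20:.1f} MiB each")
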
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81cc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
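
The table_properties_collectors line above says a file is flagged for compaction when, within a sliding window of 32768 consecutive entries, at least 16384 are deletions (the 0 deletion ratio disables the ratio-based check). A simplified model of that windowed rule, an illustration rather than RocksDB's CompactOnDeletionCollector implementation:

    from collections import deque

    WINDOW, TRIGGER = 32768, 16384

    def needs_compaction(entries) -> bool:
        """entries: iterable of booleans, True = deletion (tombstone)."""
        window = deque(maxlen=WINDOW)
        deletions = 0
        for is_delete in entries:
            if len(window) == WINDOW and window[0]:
                deletions -= 1          # oldest entry slides out of the window
            window.append(is_delete)
            if is_delete:
                deletions += 1
            if deletions >= TRIGGER:
                return True             # half the window is tombstones
        return False

    # A file whose tail is all deletions trips the trigger:
    print(needs_compaction([False] * 20000 + [True] * 20000))  # True
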
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81cc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e81cc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 82352945-dba3-4ac3-9a0c-aff18c08a451
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752408714, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752409405, "job": 1, "event": "recovery_finished"}
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: freelist init
Nov 26 01:15:52 compute-0 ceph-osd[206645]: freelist _read_cfg
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs umount
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) close
Nov 26 01:15:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 26 01:15:52 compute-0 ceph-mon[192746]: Deploying daemon osd.1 on compute-0
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.607370393 +0000 UTC m=+0.089693356 container create 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bdev(0x55a132fbf400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs mount
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluefs mount shared_bdev_used = 4718592
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Git sha 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DB SUMMARY
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DB Session ID:  D7MYUU0SMOLSTMZV9AH0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                     Options.env: 0x55a133de6230
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                Options.info_log: 0x55a132e81440
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.write_buffer_manager: 0x55a132eae460
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.row_cache: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                              Options.wal_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.wal_compression: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Compression algorithms supported:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kZSTD supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kXpressCompression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kZlibCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e80bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.578777494 +0000 UTC m=+0.061100497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e811c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a132e68430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
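The multi-line RocksDB dumps in this capture (the table_factory options line above, and the stats dump further down) carry their embedded newlines escaped as #012. That is rsyslog's control-character escaping: '#' followed by the three-digit octal byte value, so #012 is a line feed and #011 a tab. A minimal Python sketch to restore the original layout when reading this capture; it is naive and would also rewrite a literal '#NNN' appearing in message text, which is acceptable for inspection:

    import re
    import sys

    # rsyslog escapes control characters as '#' plus a three-digit
    # octal byte value: '#012' is LF, '#011' is TAB.
    OCTAL_ESCAPE = re.compile(r"#([0-7]{3})")

    def unescape(line: str) -> str:
        # Replace every '#NNN' escape with the byte it encodes.
        return OCTAL_ESCAPE.sub(lambda m: chr(int(m.group(1), 8)), line)

    if __name__ == "__main__":
        for line in sys.stdin:          # e.g. python unescape.py < messages
            sys.stdout.write(unescape(line))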
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e811c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a132e68430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a132e811c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a132e68430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
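Every column family dumped above carries the same level sizing: max_bytes_for_level_base of 1 GiB, max_bytes_for_level_multiplier of 8, num_levels of 7, level_compaction_dynamic_level_bytes off, and all max_bytes_for_level_multiplier_addtl[n] at 1. Under RocksDB's classic (non-dynamic) leveled sizing, the target capacity of level n (n >= 1) is base * multiplier^(n-1); a quick sketch of what these OSD settings allow per level, assuming that standard rule:

    # Level capacities implied by the options dumped above (classic
    # leveled sizing; all addtl[n] multipliers are 1, so they drop out).
    base = 1073741824    # max_bytes_for_level_base (1 GiB)
    multiplier = 8.0     # max_bytes_for_level_multiplier
    num_levels = 7       # Options.num_levels

    for n in range(1, num_levels):
        print(f"L{n}: {base * multiplier ** (n - 1) / 2**30:,.0f} GiB")
    # L1: 1, L2: 8, L3: 64, L4: 512, L5: 4,096, L6: 32,768 (GiB)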
Nov 26 01:15:52 compute-0 systemd[1]: Started libpod-conmon-6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0.scope.
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 82352945-dba3-4ac3-9a0c-aff18c08a451
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752693892, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752699671, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119752, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "82352945-dba3-4ac3-9a0c-aff18c08a451", "db_session_id": "D7MYUU0SMOLSTMZV9AH0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752705156, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119752, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "82352945-dba3-4ac3-9a0c-aff18c08a451", "db_session_id": "D7MYUU0SMOLSTMZV9AH0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752711476, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119752, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "82352945-dba3-4ac3-9a0c-aff18c08a451", "db_session_id": "D7MYUU0SMOLSTMZV9AH0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119752715635, "job": 1, "event": "recovery_finished"}
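The EVENT_LOG_v1 records are single-line JSON after the "EVENT_LOG_v1 " marker, so the WAL replay above can be timed straight from the capture: recovery_started at time_micros 1764119752693892 and recovery_finished at 1764119752715635, roughly 21.7 ms to replay WAL file 31 and write the three small SSTs. A sketch of pulling those events out of a journal dump (the input path is a placeholder):

    import json

    MARKER = "EVENT_LOG_v1 "

    def iter_events(path):
        # Yield the JSON payload of every RocksDB EVENT_LOG_v1 line.
        with open(path) as fh:
            for line in fh:
                if MARKER in line:
                    yield json.loads(line.split(MARKER, 1)[1])

    events = {e["event"]: e for e in iter_events("messages")  # placeholder
              if e.get("job") == 1}
    span = (events["recovery_finished"]["time_micros"]
            - events["recovery_started"]["time_micros"])
    print(f"job 1 WAL recovery: {span / 1000:.1f} ms")   # 21.7 ms here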
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.722017094 +0000 UTC m=+0.204340087 container init 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.733747492 +0000 UTC m=+0.216070445 container start 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.738523265 +0000 UTC m=+0.220846258 container attach 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:15:52 compute-0 sharp_poitras[207159]: 167 167
Nov 26 01:15:52 compute-0 systemd[1]: libpod-6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0.scope: Deactivated successfully.
Nov 26 01:15:52 compute-0 conmon[207159]: conmon 6d7a29fabd7515bfcf80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0.scope/container/memory.events
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.746666992 +0000 UTC m=+0.228989975 container died 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a133e4c000
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: DB pointer 0x55a132e9da00
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
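The option string _open_db reports here is a flat comma-separated key=value list; the values match the per-column-family dumps above, and in Ceph this string is normally supplied through the bluestore_rocksdb_options config setting. Parsing it into a dict makes it easy to diff one OSD's tuning against another's; the naive split below suffices for this flat string, though RocksDB option strings can in general nest {...} blocks and would then need a real parser:

    # The string logged by _open_db above, verbatim.
    OPTS = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,"
            "compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,"
            "max_background_jobs=4,level0_file_num_compaction_trigger=8,"
            "max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,"
            "compaction_readahead_size=2MB,max_total_wal_size=1073741824,"
            "writable_file_max_buffer_size=0")

    def parse_rocksdb_opts(s: str) -> dict:
        # Flat 'k=v,k=v' only; nested '{...}' strings are not handled.
        return dict(kv.split("=", 1) for kv in s.split(","))

    opts = parse_rocksdb_opts(OPTS)
    assert opts["write_buffer_size"] == "16777216"  # matches the CF dumps above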
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 26 01:15:52 compute-0 ceph-osd[206645]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 26 01:15:52 compute-0 ceph-osd[206645]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 01:15:52 compute-0 ceph-osd[206645]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 01:15:52 compute-0 ceph-osd[206645]: _get_class not permitted to load lua
Nov 26 01:15:52 compute-0 ceph-osd[206645]: _get_class not permitted to load sdk
Nov 26 01:15:52 compute-0 ceph-osd[206645]: _get_class not permitted to load test_remote_reads
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 load_pgs
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 load_pgs opened 0 pgs
Nov 26 01:15:52 compute-0 ceph-osd[206645]: osd.0 0 log_to_monitors true
Nov 26 01:15:52 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0[206641]: 2025-11-26T01:15:52.758+0000 7f6f65f9a740 -1 osd.0 0 log_to_monitors true
Nov 26 01:15:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 26 01:15:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 01:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6c461ce51478f5dc6576e9c48b8a41888f114dbf86928a6307d608126c5aaf-merged.mount: Deactivated successfully.
Nov 26 01:15:52 compute-0 python3[207040]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
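The Ansible task above is counting OSDs that have come up: it runs ceph status --format json inside the v18 image and filters the result with jq .osdmap.num_up_osds. The same probe without jq, as a hedged Python sketch; it drops the two spec/assimilate volume mounts from the logged command, which a status call does not need, and assumes the host's admin keyring is readable:

    import json
    import subprocess

    # Mirror of the logged check, minus jq: ask the cluster for JSON
    # status and read osdmap.num_up_osds. The fsid and paths are the
    # ones visible in the command line above.
    CMD = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "status", "--format", "json",
    ]

    out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
    print(json.loads(out)["osdmap"]["num_up_osds"])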
Nov 26 01:15:52 compute-0 podman[207000]: 2025-11-26 01:15:52.807021528 +0000 UTC m=+0.289344481 container remove 6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poitras, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:52 compute-0 systemd[1]: libpod-conmon-6d7a29fabd7515bfcf808e9c468ac3a5deb405c4cd567e08eebca8b549bfc7f0.scope: Deactivated successfully.
Nov 26 01:15:52 compute-0 podman[207272]: 2025-11-26 01:15:52.870288845 +0000 UTC m=+0.043847036 container create 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 26 01:15:52 compute-0 systemd[1]: Started libpod-conmon-2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670.scope.
Nov 26 01:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:52 compute-0 podman[207272]: 2025-11-26 01:15:52.853082474 +0000 UTC m=+0.026640665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011d74beaf5606328d1451280f017f86ebaa43db8ab49f4f95af0d7849ea851/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011d74beaf5606328d1451280f017f86ebaa43db8ab49f4f95af0d7849ea851/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8011d74beaf5606328d1451280f017f86ebaa43db8ab49f4f95af0d7849ea851/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:52 compute-0 podman[207272]: 2025-11-26 01:15:52.99081481 +0000 UTC m=+0.164373091 container init 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:53 compute-0 podman[207272]: 2025-11-26 01:15:53.0036915 +0000 UTC m=+0.177249731 container start 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:53 compute-0 podman[207272]: 2025-11-26 01:15:53.01014335 +0000 UTC m=+0.183701631 container attach 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:53 compute-0 podman[207308]: 2025-11-26 01:15:53.155613352 +0000 UTC m=+0.088659317 container create 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:53 compute-0 podman[207308]: 2025-11-26 01:15:53.115390539 +0000 UTC m=+0.048436544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:53 compute-0 systemd[1]: Started libpod-conmon-56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e.scope.
Nov 26 01:15:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:53 compute-0 podman[207308]: 2025-11-26 01:15:53.333420167 +0000 UTC m=+0.266466112 container init 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:15:53 compute-0 podman[207308]: 2025-11-26 01:15:53.346853873 +0000 UTC m=+0.279899798 container start 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 01:15:53 compute-0 podman[207308]: 2025-11-26 01:15:53.353115807 +0000 UTC m=+0.286161732 container attach 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:53 compute-0 ceph-mon[192746]: from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:53 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:15:53 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:53 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 01:15:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641767601' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 01:15:53 compute-0 flamboyant_ramanujan[207297]: 
Nov 26 01:15:53 compute-0 flamboyant_ramanujan[207297]: {"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":7,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764119737,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T01:15:42.962021+0000","services":{}},"progress_events":{}}
Nov 26 01:15:53 compute-0 systemd[1]: libpod-2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670.scope: Deactivated successfully.
Nov 26 01:15:53 compute-0 podman[207272]: 2025-11-26 01:15:53.695523969 +0000 UTC m=+0.869082160 container died 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 01:15:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 01:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8011d74beaf5606328d1451280f017f86ebaa43db8ab49f4f95af0d7849ea851-merged.mount: Deactivated successfully.
Nov 26 01:15:53 compute-0 podman[207272]: 2025-11-26 01:15:53.759699221 +0000 UTC m=+0.933257412 container remove 2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:15:53 compute-0 systemd[1]: libpod-conmon-2ce1ca38e0be9e49bda3c75e91ec6d12d415ce59cd6ebbaa568b5694add00670.scope: Deactivated successfully.
Nov 26 01:15:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:54 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test[207324]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 01:15:54 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test[207324]:                            [--no-systemd] [--no-tmpfs]
Nov 26 01:15:54 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test[207324]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 01:15:54 compute-0 systemd[1]: libpod-56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e.scope: Deactivated successfully.
Nov 26 01:15:54 compute-0 podman[207308]: 2025-11-26 01:15:54.031748718 +0000 UTC m=+0.964794683 container died 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e51ebb7d16f9135c9a71c4ddd5e5058063ec2ae2e61bbdd970477dde822215f-merged.mount: Deactivated successfully.
Nov 26 01:15:54 compute-0 podman[207308]: 2025-11-26 01:15:54.127529763 +0000 UTC m=+1.060575728 container remove 56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:15:54 compute-0 systemd[1]: libpod-conmon-56799aac0cbbc61e034bd061bb83984dd0f906b4672ced771502ce9e8d027d7e.scope: Deactivated successfully.
Nov 26 01:15:54 compute-0 podman[207364]: 2025-11-26 01:15:54.226686762 +0000 UTC m=+0.149679861 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:15:54 compute-0 systemd[1]: Reloading.
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 done with init, starting boot process
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 start_boot
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 01:15:54 compute-0 ceph-osd[206645]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:54 compute-0 ceph-mon[192746]: from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:54 compute-0 ceph-mon[192746]: from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2157094075; not ready for session (expect reconnect)
Nov 26 01:15:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:54 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:54 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:55 compute-0 systemd[1]: Reloading.
Nov 26 01:15:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:15:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:15:55 compute-0 systemd[1]: Starting Ceph osd.1 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:15:55 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2157094075; not ready for session (expect reconnect)
Nov 26 01:15:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:55 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:55 compute-0 ceph-mon[192746]: from='osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:15:55 compute-0 podman[207490]: 2025-11-26 01:15:55.584407445 +0000 UTC m=+0.132417548 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 01:15:55 compute-0 podman[207555]: 2025-11-26 01:15:55.887208531 +0000 UTC m=+0.097777621 container create 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:15:55 compute-0 podman[207555]: 2025-11-26 01:15:55.853974193 +0000 UTC m=+0.064543313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:56 compute-0 podman[207555]: 2025-11-26 01:15:56.051270673 +0000 UTC m=+0.261839853 container init 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:15:56 compute-0 podman[207555]: 2025-11-26 01:15:56.059142142 +0000 UTC m=+0.269711262 container start 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:56 compute-0 podman[207555]: 2025-11-26 01:15:56.079269374 +0000 UTC m=+0.289838494 container attach 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:15:56 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2157094075; not ready for session (expect reconnect)
Nov 26 01:15:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:56 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:56 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:57 compute-0 bash[207555]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate[207569]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 01:15:57 compute-0 bash[207555]: --> ceph-volume raw activate successful for osd ID: 1
Nov 26 01:15:57 compute-0 systemd[1]: libpod-202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b.scope: Deactivated successfully.
Nov 26 01:15:57 compute-0 systemd[1]: libpod-202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b.scope: Consumed 1.374s CPU time.
Nov 26 01:15:57 compute-0 podman[207555]: 2025-11-26 01:15:57.417456883 +0000 UTC m=+1.628026043 container died 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b9387df4f932cdcb1c6265d84fa93a1daa1b6ab04d9416ab3b190a3815375a-merged.mount: Deactivated successfully.
Nov 26 01:15:57 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2157094075; not ready for session (expect reconnect)
Nov 26 01:15:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:57 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:57 compute-0 podman[207555]: 2025-11-26 01:15:57.541303982 +0000 UTC m=+1.751873102 container remove 202215d10962f24a323503cda34f5d5b343fb727bc197930ef3bfd4f86606c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:57 compute-0 podman[207755]: 2025-11-26 01:15:57.903752383 +0000 UTC m=+0.054706118 container create 538a4fcc44e5bd823ae44e70947f0075f8f729f47f5df99ed5394a53f3c42b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.878 iops: 4320.843 elapsed_sec: 0.694
Nov 26 01:15:57 compute-0 ceph-osd[206645]: log_channel(cluster) log [WRN] : OSD bench result of 4320.843123 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 0 waiting for initial osdmap
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0[206641]: 2025-11-26T01:15:57.923+0000 7f6f61f1a640 -1 osd.0 0 waiting for initial osdmap
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 26 01:15:57 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-0[206641]: 2025-11-26T01:15:57.949+0000 7f6f5d542640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6408a9a0a31a7accdffd6049d3e0017645d7f0bcdf3fbfa06370012ba667d9f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:57 compute-0 ceph-osd[206645]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 26 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6408a9a0a31a7accdffd6049d3e0017645d7f0bcdf3fbfa06370012ba667d9f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6408a9a0a31a7accdffd6049d3e0017645d7f0bcdf3fbfa06370012ba667d9f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6408a9a0a31a7accdffd6049d3e0017645d7f0bcdf3fbfa06370012ba667d9f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6408a9a0a31a7accdffd6049d3e0017645d7f0bcdf3fbfa06370012ba667d9f3/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:57 compute-0 podman[207755]: 2025-11-26 01:15:57.969579412 +0000 UTC m=+0.120533137 container init 538a4fcc44e5bd823ae44e70947f0075f8f729f47f5df99ed5394a53f3c42b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:15:57 compute-0 podman[207755]: 2025-11-26 01:15:57.886563173 +0000 UTC m=+0.037516938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:57 compute-0 podman[207755]: 2025-11-26 01:15:57.983399107 +0000 UTC m=+0.134352832 container start 538a4fcc44e5bd823ae44e70947f0075f8f729f47f5df99ed5394a53f3c42b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:15:57 compute-0 bash[207755]: 538a4fcc44e5bd823ae44e70947f0075f8f729f47f5df99ed5394a53f3c42b25
Nov 26 01:15:58 compute-0 systemd[1]: Started Ceph osd.1 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: pidfile_write: ignore empty --pid-file
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a115800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a115800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a115800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a115800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56af4d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56af4d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56af4d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56af4d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56af4d800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a115800 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 01:15:58 compute-0 ceph-osd[207774]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 26 01:15:58 compute-0 ceph-osd[207774]: load: jerasure load: lrc 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2157094075; not ready for session (expect reconnect)
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 26 01:15:58 compute-0 ceph-osd[206645]: osd.0 9 state: booting -> active
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075] boot
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:15:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
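
[Editorial note: the "failed to return metadata" errors above are startup transients. The mon answers "osd metadata" with (2) No such file or directory until the OSD in question has booted and registered its metadata, which is why the mgr re-issues the osd.0 query right after the boot message; osd.1 and osd.2 are still starting. A minimal polling sketch, not part of Ceph, assuming a ceph CLI with the default conf/keyring on the host, that waits out this window using the real `ceph osd metadata` command:]

    import json
    import subprocess
    import time

    def wait_for_osd_metadata(osd_id: int, retries: int = 20, delay: float = 3.0):
        """Poll `ceph osd metadata <id>` until the mon can serve it."""
        for _ in range(retries):
            proc = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
                capture_output=True, text=True,
            )
            if proc.returncode == 0:
                return json.loads(proc.stdout)
            time.sleep(delay)  # mon returns ENOENT until the OSD boots
        raise TimeoutError(f"osd.{osd_id} metadata still unavailable")
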
Nov 26 01:15:58 compute-0 ceph-osd[207774]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 01:15:58 compute-0 ceph-osd[207774]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
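
[Editorial note: the two mclock figures above are linked by a simple ratio: the per-shard bandwidth budget divided by the per-IO byte cost gives the assumed IOPS capacity. A quick check, as a sketch; the 315 IOPS figure is the documented default of osd_mclock_max_capacity_iops_hdd for rotational devices, an assumption rather than something logged here:]

    bandwidth_per_shard = 157286400.0   # bytes/s, as logged
    cost_per_io = 499321.90             # bytes/io, as logged
    print(bandwidth_per_shard / cost_per_io)    # ~315.0 IOPS (assumed HDD default)
    print(bandwidth_per_shard == 150 * 2**20)   # True: exactly 150 MiB/s
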
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2dec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
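
[Editorial note: _set_cache_sizes reports how this OSD's 1 GiB BlueStore cache is split by ratio: 0.45 for BlueStore metadata, 0.45 for the RocksDB block cache, 0.04 for the kv onode share, 0.06 for object data. Reproducing the byte figures as a sketch; the kv share of 483183820 bytes reappears later as the RocksDB BinnedLRUCache capacity:]

    cache_size = 1073741824  # 1 GiB, as logged
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    for name, ratio in ratios.items():
        print(f"{name:9s} {int(cache_size * ratio):>10d} bytes")
    # kv -> 483183820 bytes, matching block_cache capacity in the RocksDB dump below
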
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs mount
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs mount shared_bdev_used = 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
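
[Editorial note: unit check for the BlueFS lines above: capacity 0x4ffc00000 is the 21470642176-byte (20 GiB) block device opened earlier, and the 0x10000 allocation unit is 64 KiB. The db/db.slow size of 20397110067 bytes is exactly 95% of the device, consistent with BlueStore leaving ~5% headroom when RocksDB shares the main device; a sketch of the arithmetic:]

    capacity = int("4ffc00000", 16)
    print(capacity)               # 21470642176 == the bdev open size
    print(int("10000", 16))       # 65536 -> 64 KiB BlueFS allocation unit
    print(capacity * 95 // 100)   # 20397110067 == the db_paths size logged above
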
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Git sha 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DB SUMMARY
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DB Session ID:  RYIWU6K0MVT7BMWZ80OJ
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                     Options.env: 0x55a56af9fe30
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                Options.info_log: 0x55a56a1a0bc0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.write_buffer_manager: 0x55a56b0ac460
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.row_cache: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.wal_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.wal_compression: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Compression algorithms supported:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kZSTD supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kXpressCompression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kBZip2Compression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kZSTDNotFinalCompression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kLZ4Compression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kZlibCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kLZ4HCCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     kSnappyCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x55a56a188dd0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
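
[Editorial note: taken together, the [default] column-family options above sketch the LSM geometry: 16 MiB memtables merged six at a time give roughly 96 MiB flushes, L1 is capped at max_bytes_for_level_base = 1 GiB, and with level_compaction_dynamic_level_bytes disabled each deeper level's target is 8x the previous one. The BinnedLRUCache capacity of 483183820 bytes in the table_factory dump is the kv 0.45 share of the 1 GiB BlueStore cache reported by _set_cache_sizes. A small sketch of the static level targets, ignoring RocksDB's runtime adjustments:]

    base = 1073741824          # max_bytes_for_level_base (L1 target)
    multiplier = 8             # max_bytes_for_level_multiplier
    for level in range(1, 7):  # num_levels: 7 -> L1..L6
        target = base * multiplier ** (level - 1)
        print(f"L{level}: {target / 2**30:>6.0f} GiB")
    print(6 * 16 * 2**20)      # ~96 MiB flush (6 x 16 MiB memtables merged)
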
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x55a56a188dd0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x55a56a188dd0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x55a56a188dd0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
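The block above is RocksDB's per-column-family option dump, emitted as the OSD opens its BlueStore database. The compaction-relevant values (L0 compaction at 8 files, write slowdown at 20, stall at 36; 64 MiB target SSTs; 1 GiB at L1 growing 8x per level; LZ4 with bottommost compression disabled) map one-to-one onto rocksdb::ColumnFamilyOptions fields. A minimal C++ sketch of that mapping, assuming the stock RocksDB public API and copying the values verbatim from the dump (this illustrates the mapping, it is not Ceph's configuration code):

// Sketch: reproduce the logged compaction/compression tuning on a
// rocksdb::ColumnFamilyOptions. Values are copied from the dump above.
#include <rocksdb/options.h>

rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16777216;            // 16 MiB memtables
  cf.max_write_buffer_number = 64;
  cf.min_write_buffer_number_to_merge = 6;
  cf.compression = rocksdb::kLZ4Compression;
  cf.bottommost_compression = rocksdb::kDisableCompressionOption;
  cf.level0_file_num_compaction_trigger = 8;
  cf.level0_slowdown_writes_trigger = 20;
  cf.level0_stop_writes_trigger = 36;
  cf.target_file_size_base = 67108864;        // 64 MiB SSTs
  cf.max_bytes_for_level_base = 1073741824;   // 1 GiB at L1
  cf.max_bytes_for_level_multiplier = 8.0;    // 8x growth per level
  cf.soft_pending_compaction_bytes_limit = 68719476736ULL;   // 64 GiB
  cf.hard_pending_compaction_bytes_limit = 274877906944ULL;  // 256 GiB
  cf.max_compaction_bytes = 1677721600;       // 25 * target_file_size_base
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.ttl = 2592000;                           // 30 days
  return cf;
}

The 20/36 slowdown/stop thresholds are the file-count triggers behind RocksDB write stalls when compaction falls behind; the 64/256 GiB pending-compaction limits play the same role at the byte level.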
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
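Each family also shares the BlockBasedTable configuration shown in the expanded table_factory dump above: 4 KiB blocks, format_version 5, index and filter blocks held in the block cache, whole-key bloom filtering, and a 483183820-byte (~461 MiB) BinnedLRUCache split into 2^4 shards. BinnedLRUCache is Ceph's own cache implementation; the sketch below substitutes RocksDB's stock sharded LRU cache with the same capacity and shard count, and the bloom bits-per-key value is an assumption, since the dump only records "filter_policy: bloomfilter":

// Sketch: the BlockBasedTable settings from the table_factory dump.
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

void ApplyLoggedTableFactory(rocksdb::ColumnFamilyOptions& cf) {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;
  t.metadata_block_size = 4096;
  t.block_restart_interval = 16;
  t.index_block_restart_interval = 1;
  t.format_version = 5;
  t.cache_index_and_filter_blocks = true;
  t.pin_top_level_index_and_filter = true;
  t.whole_key_filtering = true;
  // "filter_policy: bloomfilter" -- bits-per-key is not logged,
  // so 10 here is an assumption.
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  // Stand-in for Ceph's BinnedLRUCache: same capacity, same shard count.
  t.block_cache = rocksdb::NewLRUCache(483183820 /* ~461 MiB */,
                                       4 /* num_shard_bits */);
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
}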
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1280)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
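The near-identical option blocks for [p-0], [p-1], [p-2] and the [O-*] families that follow are not duplicated log noise: the OSD's BlueStore backend opens its RocksDB with several column families, each carrying its own ColumnFamilyOptions, and RocksDB dumps the options once per family. A hedged sketch of such a multi-family open, reusing the two helpers above (illustrative only; BlueStore's real family list, sharding logic, and DB path differ):

// Sketch: one ColumnFamilyDescriptor per logged family explains the
// repeated option dumps. Helper functions are from the earlier sketches.
#include <rocksdb/db.h>

#include <string>
#include <vector>

rocksdb::ColumnFamilyOptions MakeLoggedCfOptions();           // sketched earlier
void ApplyLoggedTableFactory(rocksdb::ColumnFamilyOptions&);  // sketched earlier

void OpenShardedDb(const std::string& path) {
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  rocksdb::ColumnFamilyOptions cf = MakeLoggedCfOptions();
  ApplyLoggedTableFactory(cf);

  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  cfs.emplace_back(rocksdb::kDefaultColumnFamilyName, cf);
  for (const char* name : {"p-0", "p-1", "p-2", "O-0", "O-1"})
    cfs.emplace_back(name, cf);  // one descriptor per logged family

  rocksdb::DB* db = nullptr;
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::Status s = rocksdb::DB::Open(db_opts, path, cfs, &handles, &db);
  if (!s.ok()) return;  // real code would surface the error
  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
}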
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
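One non-default entry recurs in every family's dump: a CompactOnDeletionCollector with a 32768-entry sliding window, a deletion trigger of 16384, and a deletion ratio of 0 (unused). It marks an SST for compaction as soon as half of any 32768-entry window is tombstones, keeping delete-heavy workloads from accumulating dead keys. A sketch using RocksDB's stock factory with the logged parameters:

// Sketch: the "table_properties_collectors: CompactOnDeletionCollector"
// entry above, attached via RocksDB's stock factory. Parameters are
// copied from the log line.
#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

void AddDeletionTriggeredCompaction(rocksdb::ColumnFamilyOptions& cf) {
  cf.table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0.0));
}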
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
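
Note: the column-family dump above configures level-style compaction with max_bytes_for_level_base = 1073741824 (1 GiB), a level multiplier of 8, dynamic level sizing off, and all addtl multipliers at 1, so the nominal level capacities grow geometrically. A minimal sketch of that arithmetic, using only values copied from the dump:

    base = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0          # Options.max_bytes_for_level_multiplier
    levels = 7          # Options.num_levels
    for level in range(1, levels):
        # addtl[] multipliers are all 1 above, so growth is purely geometric
        print(f"L{level}: {base * mult ** (level - 1) / 2**30:g} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
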
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1260)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
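
Note: the eleven extra column families recovered here (m-0..m-2, p-0..p-2, O-0..O-2, L, P) are BlueStore's sharded RocksDB keyspace; the sharding scheme itself is a Ceph implementation detail, but the names and IDs can be pulled straight from these lines. A throwaway parsing sketch (the input list is abbreviated):

    import re

    cf_lines = [
        "Column family [default] (ID 0), log number is 5",
        "Column family [m-0] (ID 1), log number is 5",
        # ... the remaining 'Column family' lines from the recovery output above
    ]
    pat = re.compile(r"Column family \[(.+?)\] \(ID (\d+)\)")
    cfs = {m.group(1): int(m.group(2)) for m in map(pat.search, cf_lines) if m}
    print(cfs)  # {'default': 0, 'm-0': 1}
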
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5103654f-9e8a-4faa-9dc4-d24a49b6c8a7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758700060, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758700679, "job": 1, "event": "recovery_finished"}
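
Note: the EVENT_LOG_v1 records are the easiest lines here to machine-read, since everything after the marker is a JSON object. A sketch, assuming the line has already been read out of the journal:

    import json

    line = ('... rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758700060, '
            '"job": 1, "event": "recovery_started", "wal_files": [31]}')
    payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    print(payload["event"], payload.get("wal_files"))  # recovery_started [31]
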
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
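
Note: the option string BlueStore echoes here is the flat comma-separated key=value form (the style of Ceph's bluestore_rocksdb_options setting). It splits cleanly into a dict; values such as 2MB stay as strings:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["write_buffer_size"], opts["compaction_readahead_size"])  # 16777216 2MB
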
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: freelist init
Nov 26 01:15:58 compute-0 ceph-osd[207774]: freelist _read_cfg
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
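
Note: the allocator reports its sizes in hex. Decoding them shows an almost empty 20 GiB device with only three 4 KiB blocks in use, which is consistent with the tiny 1.9e-07 fragmentation score. A quick arithmetic check:

    capacity = 0x4ffc00000   # 21470642176 bytes
    free     = 0x4ffbfd000
    block    = 0x1000        # matches the 4 KiB min_alloc_size / block size
    print(f"{capacity / 2**30:.2f} GiB capacity")         # 20.00 GiB capacity
    print(f"{(capacity - free) // block} blocks in use")  # 3 blocks in use
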
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs umount
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) close
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bdev(0x55a56a2df400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
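
Note: the "(22) Invalid argument" from the F_SET_FILE_RW_HINT ioctl above is plain EINVAL: the write-lifetime hint simply is not supported on this virtio disk, and the open clearly proceeds anyway, since the size/discard line follows. Decoding the errno:

    import errno, os

    print(errno.EINVAL, os.strerror(errno.EINVAL))  # 22 Invalid argument
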
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs mount
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluefs mount shared_bdev_used = 4718592
Nov 26 01:15:58 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
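
Note: the db_paths budget of 20397110067 bytes works out to 95% of the 21470642176-byte shared device reported above (truncated to an integer), i.e. BlueStore does not promise RocksDB the whole disk. Checking the ratio:

    device = 21470642176   # bdev open size from the lines above
    budget = 20397110067   # db_paths value
    print(f"{budget / device:.4%}")       # 95.0000%
    print(device * 95 // 100 == budget)   # True
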
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Git sha 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DB SUMMARY
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DB Session ID:  RYIWU6K0MVT7BMWZ80OI
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                     Options.env: 0x55a56b13c230
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                Options.info_log: 0x55a56a1a09c0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.write_buffer_manager: 0x55a56b0ac460
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.row_cache: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                              Options.wal_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.wal_compression: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Compression algorithms supported:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kZSTD supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kXpressCompression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kZlibCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: DMutex implementation: pthread_mutex_t
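
Note: of the eight algorithms probed above, only the LZ4 family, Zlib and Snappy are compiled in; ZSTD is not, which is consistent with the column families selecting Options.compression: LZ4. A sketch that recovers the supported set from lines shaped like the ones above (input abbreviated):

    import re

    probe_lines = [
        "rocksdb: \tkZSTD supported: 0",
        "rocksdb: \tkLZ4Compression supported: 1",
        # ... the remaining 'supported:' lines from the block above
    ]
    pat = re.compile(r"(k\w+) supported: ([01])")
    supported = [m.group(1) for l in probe_lines
                 if (m := pat.search(l)) and m.group(2) == "1"]
    print(supported)  # ['kLZ4Compression']
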
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
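The option dump above for column family [m-1] is repeated verbatim for the [m-2], [p-0], [p-1], and [p-2] families that follow, so differences between blocks are easier to find mechanically than by eye. The Python sketch below is an illustration keyed to the line shapes in this journal; the regexes and the parse() helper are assumptions for this log format, not a Ceph or RocksDB API.

#!/usr/bin/env python3
"""Sketch: collect ceph-osd's per-column-family RocksDB option dumps
from journal text into dictionaries, one per column family.
Assumption: lines look like the ones in this log,
i.e. "... rocksdb: ... Options.key: value"."""
import re
import sys

CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
OPTION = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s*(.+?)\s*$")

def parse(stream):
    """Return {column_family: {option_name: value}}."""
    cfs, current = {}, None
    for line in stream:
        if "rocksdb:" not in line:
            continue
        header = CF_HEADER.search(line)
        if header:
            current = cfs.setdefault(header.group(1), {})
            continue
        opt = OPTION.search(line)
        if opt and current is not None:
            current[opt.group(1)] = opt.group(2)
    return cfs

if __name__ == "__main__":
    # Usage (hypothetical unit name): journalctl -u ceph-osd@<id> | python3 cf_options.py
    for name, opts in parse(sys.stdin).items():
        print(f"{name}: {len(opts)} options")

On the lines shown here, the m-* and p-* families all parse to the same dictionary (write_buffer_size 16777216, max_write_buffer_number 64, LZ4 compression, level-style compaction, and so on).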
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a0dc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
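Every tunable in the five blocks above is identical across families; the only variation in this startup dump appears in the [O-0] block below, whose table_factory section points at a separate BinnedLRUCache (block_cache 0x55a56a188430, capacity 536870912) rather than the 483183820-byte cache shared by the m-* and p-* families. These families come from BlueStore's sharded RocksDB layout (in recent Ceph releases the sharding spec is carried by the bluestore_rocksdb_cfs option, with O conventionally holding onode data). A minimal, self-contained diff helper in the same spirit as the parser sketch above; diff_options and the sample dictionaries are hypothetical:

def diff_options(a, b):
    """Hypothetical helper: map each option whose value differs between
    two parsed column families to its (a_value, b_value) pair."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Tiny stand-ins for two parsed families from the dump above; with real
# parse() output, diff_options(cfs["m-1"], cfs["p-2"]) would also be {}
# because every Options.* line matches. The O-0 cache difference sits in
# the table_factory detail block, which the Options.* pattern skips.
m1 = {"write_buffer_size": "16777216", "compression": "LZ4"}
o0 = {"write_buffer_size": "16777216", "compression": "LZ4"}
print(diff_options(m1, o0))  # -> {}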
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a56a188430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
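
Annotation: the single-line table_factory dump above (and the stats dumps further down) are multi-line RocksDB output flattened by the logging pipeline, which escapes each embedded newline as #012 (octal 012) and each tab as #011, rsyslog-style. A minimal Python sketch to restore the original layout when reading a capture like this one; the input path is a placeholder:

    import re

    def unescape_journald(line: str) -> str:
        # Expand octal control-character escapes (#012 = newline, #011 = tab).
        # Note: a literal "#nnn" in the message body would be expanded too,
        # so this is a best-effort pretty-printer, not a strict decoder.
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

    with open("compute-0.log") as f:      # placeholder path
        for raw in f:
            print(unescape_journald(raw.rstrip("\n")))
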
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a56a188430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:           Options.merge_operator: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a56a1a1380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a56a188430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.compression: LZ4
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.num_levels: 7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
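
Annotation: every column family above repeats the same level sizing: write_buffer_size 16 MiB with min_write_buffer_number_to_merge 6, target_file_size_base 64 MiB, max_bytes_for_level_base 1 GiB, multiplier 8, and level_compaction_dynamic_level_bytes off. With the addtl factors all 1, that implies roughly the per-level capacities below; a back-of-the-envelope sketch:

    GiB = 1 << 30
    base, mult, levels = 1 * GiB, 8, 7     # values from the dumps above

    # With level_compaction_dynamic_level_bytes = 0 and all addtl factors 1,
    # level N (N >= 1) may hold roughly base * mult**(N-1) bytes before
    # compaction pushes data down to N+1.
    for n in range(1, levels):
        print(f"L{n}: {base * mult ** (n - 1) / GiB:g} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 (GiB)
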
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 01:15:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
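
Annotation: the recovered families reflect BlueStore's sharded RocksDB key layout: besides default, the m-*, p-*, and O-* prefixes are three-way shards, plus the unsharded L and P families. The manifest listing above is easy to turn into an ID-to-name map; a sketch with a placeholder path:

    import re

    pat = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")
    families = {}
    for line in open("compute-0.log"):     # placeholder path
        m = pat.search(line)
        if m:
            families[int(m.group(2))] = m.group(1)
    # -> {0: 'default', 1: 'm-0', ..., 9: 'O-2', 10: 'L', 11: 'P'}
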
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5103654f-9e8a-4faa-9dc4-d24a49b6c8a7
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758979086, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758986499, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119758, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5103654f-9e8a-4faa-9dc4-d24a49b6c8a7", "db_session_id": "RYIWU6K0MVT7BMWZ80OI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:58 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119758994679, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119758, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5103654f-9e8a-4faa-9dc4-d24a49b6c8a7", "db_session_id": "RYIWU6K0MVT7BMWZ80OI", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119759000785, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119758, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5103654f-9e8a-4faa-9dc4-d24a49b6c8a7", "db_session_id": "RYIWU6K0MVT7BMWZ80OI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.001665422 +0000 UTC m=+0.057361333 container create 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119759004132, "job": 1, "event": "recovery_finished"}
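
Annotation: the EVENT_LOG_v1 lines bracketing the WAL replay are machine-readable: everything after the marker is a JSON object, so recovery timing and SST-creation details can be extracted directly. A minimal sketch, fed the recovery_finished line above:

    import json

    def parse_event(line: str):
        # Return the JSON payload of an "EVENT_LOG_v1 {...}" log line, or None.
        marker = "EVENT_LOG_v1 "
        i = line.find(marker)
        return json.loads(line[i + len(marker):]) if i != -1 else None

    ev = parse_event('rocksdb: EVENT_LOG_v1 {"time_micros": 1764119759004132, '
                     '"job": 1, "event": "recovery_finished"}')
    print(ev["event"], ev["time_micros"])   # recovery_finished 1764119759004132

Subtracting recovery_started's time_micros (1764119758979086) from recovery_finished's gives a replay of about 25 ms for WAL #31.
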
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a56b16c000
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: DB pointer 0x55a56a1c7a00
Nov 26 01:15:59 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
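
Annotation: the option string _open_db logs here is BlueStore's RocksDB options setting verbatim, a flat comma-separated key=value list; it is what produced the per-column-family dumps above. Splitting it takes one line of Python; a sketch over a shortened copy of the string:

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,write_buffer_size=16777216,"
                "max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["write_buffer_size"])       # '16777216' (values stay strings)
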
Nov 26 01:15:59 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 26 01:15:59 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 26 01:15:59 compute-0 ceph-osd[207774]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 01:15:59 compute-0 ceph-osd[207774]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 01:15:59 compute-0 ceph-osd[207774]: _get_class not permitted to load lua
Nov 26 01:15:59 compute-0 ceph-osd[207774]: _get_class not permitted to load sdk
Nov 26 01:15:59 compute-0 ceph-osd[207774]: _get_class not permitted to load test_remote_reads
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 load_pgs
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 load_pgs opened 0 pgs
Nov 26 01:15:59 compute-0 ceph-mon[192746]: OSD bench result of 4320.843123 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
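
Annotation: the mon message above is mclock's sanity guard: the measured 4320.8 IOPS falls outside the accepted [50, 500] window, so the configured capacity stays at 315 IOPS and the operator is told to set osd_mclock_max_capacity_iops_[hdd|ssd] by hand after an external benchmark. A toy restatement of the check; the function name is mine, not Ceph's:

    def bench_result_accepted(iops: float, lo: float = 50.0, hi: float = 500.0) -> bool:
        # Mirrors the guard quoted above: out-of-range results are ignored
        # and the existing osd_mclock capacity value is left unchanged.
        return lo <= iops <= hi

    print(bench_result_accepted(4320.843123))   # False -> stays at 315 IOPS
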
Nov 26 01:15:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:15:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 26 01:15:59 compute-0 ceph-mon[192746]: Deploying daemon osd.2 on compute-0
Nov 26 01:15:59 compute-0 ceph-mon[192746]: osd.0 [v2:192.168.122.100:6802/2157094075,v1:192.168.122.100:6803/2157094075] boot
Nov 26 01:15:59 compute-0 ceph-osd[207774]: osd.1 0 log_to_monitors true
Nov 26 01:15:59 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1[207770]: 2025-11-26T01:15:59.053+0000 7f40970b1740 -1 osd.1 0 log_to_monitors true
Nov 26 01:15:59 compute-0 systemd[1]: Started libpod-conmon-95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4.scope.
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 01:15:59 compute-0 ceph-mgr[193049]: [devicehealth INFO root] creating mgr pool
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
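
Annotation: the cmd=[...] payloads in these audit lines are the mon_command JSON that clients send; the mgr's .mgr pool creation above can be reproduced from the python-rados binding. A sketch for illustration only, assuming admin credentials in /etc/ceph; not something to re-run against a live cluster:

    import json
    import rados   # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same payload the mgr dispatched in the audit log above.
    cmd = json.dumps({"prefix": "osd pool create", "format": "json",
                      "pool": ".mgr", "pg_num": 1, "pg_num_min": 1,
                      "pg_num_max": 32, "yes_i_really_mean_it": True})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)
    cluster.shutdown()
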
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:58.981039456 +0000 UTC m=+0.036735377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.104818502 +0000 UTC m=+0.160514443 container init 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.115228443 +0000 UTC m=+0.170924354 container start 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.120347566 +0000 UTC m=+0.176043577 container attach 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:15:59 compute-0 wizardly_einstein[208361]: 167 167
Nov 26 01:15:59 compute-0 systemd[1]: libpod-95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4.scope: Deactivated successfully.
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.124405059 +0000 UTC m=+0.180100960 container died 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f87b34dadd721de0c1be48d77a35bee491e5af8302956dde857526d99a058afc-merged.mount: Deactivated successfully.
Nov 26 01:15:59 compute-0 podman[208162]: 2025-11-26 01:15:59.194167377 +0000 UTC m=+0.249863298 container remove 95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:15:59 compute-0 systemd[1]: libpod-conmon-95ab2dac81f1784f9ce60ea9c40cd43487df3a6adf852ca70efd0f3059ca39e4.scope: Deactivated successfully.
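The podman burst above is one complete short-lived container lifecycle: image pull, init, start, attach, died, remove, with systemd deactivating the matching libcrun and conmon scopes. Its only stdout, "167 167", is evidently the ceph user and group ID pair (uid/gid 167 on RHEL-family hosts), which cephadm probes with throwaway containers like this before laying out OSD state. A minimal sketch that streams the same lifecycle transitions live; it assumes a local podman 4.x whose JSON event records carry Status, ID and Name fields (the field names are an assumption):

    import json
    import subprocess

    # Follow container lifecycle events for the ceph image seen in the log.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))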
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
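The initial_weight of 0.0195 follows the CRUSH convention of weighting a device by its capacity in TiB, which pegs this OSD to roughly a 20 GiB test disk:

    # CRUSH weights are capacity in TiB by convention; quick arithmetic check.
    weight_tib = 0.0195
    print(f"{weight_tib * 1024:.2f} GiB")  # ~19.97 GiB
    print(f"{20 / 1024:.5f} TiB")          # a 20 GiB device rounds to 0.01953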
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:15:59 compute-0 podman[208395]: 2025-11-26 01:15:59.619360001 +0000 UTC m=+0.084542242 container create 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:15:59 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:15:59 compute-0 ceph-osd[206645]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 01:15:59 compute-0 ceph-osd[206645]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 26 01:15:59 compute-0 ceph-osd[206645]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 01:15:59 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:15:59 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
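The two "failed to return metadata" errors are a startup race rather than a fault: the mgr asks the mons for osd.1 and osd.2 metadata while those daemons are still activating, before they have pushed their metadata, so the mon answers ENOENT ("(2) No such file or directory"). A minimal retry sketch, assuming the ceph CLI and an admin keyring are available on this host:

    import json
    import subprocess
    import time

    def osd_metadata(osd_id: int, attempts: int = 10):
        """Poll 'ceph osd metadata' until the new OSD has registered."""
        for _ in range(attempts):
            out = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
                capture_output=True, text=True,
            )
            if out.returncode == 0:
                return json.loads(out.stdout)
            time.sleep(2)  # ENOENT until the OSD reports in
        return None

    print(osd_metadata(1))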
Nov 26 01:15:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 26 01:15:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 01:15:59 compute-0 podman[208395]: 2025-11-26 01:15:59.586965486 +0000 UTC m=+0.052147797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:15:59 compute-0 systemd[1]: Started libpod-conmon-4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194.scope.
Nov 26 01:15:59 compute-0 podman[158021]: time="2025-11-26T01:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:15:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
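The xfs remount messages fire once per path bind-mounted into the OSD activate container and are purely informational: the filesystem stores timestamps as 32-bit seconds (typical of xfs without the bigtime feature), valid until 2038. The quoted 0x7fffffff is the classic 32-bit time_t horizon:

    import datetime

    # 0x7fffffff seconds after the epoch is the 32-bit time_t rollover.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00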
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.776 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
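The agent first warns that the [pollsters] source defines more pollsters than worker threads, then confirms it will process them with one thread, so the register/discover/skip messages that follow all execute serially through a single ThreadPoolExecutor. A stripped-down sketch of that pattern (the names here are illustrative, not ceilometer's API):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        # Stand-in for a pollster run; real pollsters hit libvirt, etc.
        return f"Skip pollster {name}, no resources found this cycle"

    pollsters = ["disk.device.usage", "power.state", "cpu", "memory.usage"]

    # One worker: tasks queue behind each other, exactly as the log warns.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)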
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 podman[208395]: 2025-11-26 01:15:59.797784163 +0000 UTC m=+0.262966434 container init 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:15:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:15:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
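The DEBUG lines above show the ceilometer compute agent finishing one pollster at a time inside execute_polling_task_processing. Below is a minimal sketch of that dispatch pattern — illustrative only, not ceilometer's actual manager code; the Pollster class and meter names are assumptions made for the example.

import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
LOG = logging.getLogger("polling.manager")

class Pollster:
    def __init__(self, name):
        self.name = name
    def get_samples(self):
        # A real pollster would query libvirt/OVS here; return nothing.
        return []

def execute_polling_task_processing(pollsters):
    for pollster in pollsters:
        try:
            samples = pollster.get_samples()
            # ... a real agent would publish samples here ...
        except Exception:
            LOG.exception("Error while polling [%s]", pollster.name)
            continue
        LOG.debug("Finished processing pollster [%s].", pollster.name)

execute_polling_task_processing(
    [Pollster(n) for n in ("cpu", "memory.usage", "network.outgoing.bytes")])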
Nov 26 01:15:59 compute-0 podman[208395]: 2025-11-26 01:15:59.825198809 +0000 UTC m=+0.290381030 container start 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:15:59 compute-0 podman[208395]: 2025-11-26 01:15:59.832673168 +0000 UTC m=+0.297855389 container attach 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:15:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28997 "" "Go-http-client/1.1"
Nov 26 01:15:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5801 "" "Go-http-client/1.1"
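The two podman lines above are access-log entries for libpod REST API calls made over the podman service socket. A hedged sketch of issuing the same containers/json query by hand, using only the standard library; the socket path /run/podman/podman.sock is the conventional rootful location and is an assumption, as is the exact JSON field set.

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        # Route the HTTP request over the podman unix socket.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
for c in containers:
    print(c["Id"][:12], c.get("Names"), c.get("State"))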
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 26 01:16:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 01:16:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 01:16:00 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test[208410]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 26 01:16:00 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test[208410]:                            [--no-systemd] [--no-tmpfs]
Nov 26 01:16:00 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test[208410]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 26 01:16:00 compute-0 systemd[1]: libpod-4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194.scope: Deactivated successfully.
Nov 26 01:16:00 compute-0 podman[208395]: 2025-11-26 01:16:00.446502419 +0000 UTC m=+0.911684670 container died 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-27cdc8acea20848bcfd3dc3cca086546169f2b7501a886b651b53d7ef32a21e6-merged.mount: Deactivated successfully.
Nov 26 01:16:00 compute-0 podman[208395]: 2025-11-26 01:16:00.54251238 +0000 UTC m=+1.007694581 container remove 4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate-test, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:00 compute-0 systemd[1]: libpod-conmon-4309faf6df87dc4865a8bbd89934fb788ca65e2f5c53be8aa6f57b1f373d0194.scope: Deactivated successfully.
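The activate-test container above exited immediately after ceph-volume printed a usage message and "unrecognized arguments: --bad-option". That is the standard Python argparse failure path: the parser prints usage to stderr and exits with status 2, which conmon then reports as the container dying. A small sketch of that mechanism — the parser below only mirrors the flags shown in the logged usage line and is not ceph-volume's real argument set.

import argparse

parser = argparse.ArgumentParser(prog="ceph-volume activate")
parser.add_argument("--osd-id")
parser.add_argument("--osd-uuid")
parser.add_argument("--no-systemd", action="store_true")
parser.add_argument("--no-tmpfs", action="store_true")

try:
    parser.parse_args(["--bad-option"])
except SystemExit as e:
    # argparse calls parser.error(), which exits with status 2.
    print("parser exited with status", e.code)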
Nov 26 01:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 01:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 done with init, starting boot process
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 start_boot
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 26 01:16:00 compute-0 ceph-osd[207774]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 26 01:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:00 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:00 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1777324384; not ready for session (expect reconnect)
Nov 26 01:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:00 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:00 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:00 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 26 01:16:01 compute-0 systemd[1]: Reloading.
Nov 26 01:16:01 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:16:01 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:16:01 compute-0 openstack_network_exporter[160178]: ERROR   01:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:16:01 compute-0 openstack_network_exporter[160178]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:16:01 compute-0 openstack_network_exporter[160178]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:16:01 compute-0 openstack_network_exporter[160178]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:16:01 compute-0 openstack_network_exporter[160178]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
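The openstack_network_exporter errors above all reduce to one condition: no ovsdb-server or ovn-northd control socket files were found, so appctl calls cannot be prepared. A hedged sketch of that kind of pre-flight check; the glob pattern uses the conventional /var/run/openvswitch location, and the exporter's actual search path is an assumption.

import glob

def find_ctl_sockets(pattern="/var/run/openvswitch/*.ctl"):
    # ovsdb-server and ovs-vswitchd create *.ctl control sockets here
    # when running; an empty result means appctl has nothing to talk to.
    socks = glob.glob(pattern)
    if not socks:
        print("no control socket files found for the ovs db server")
    return socks

find_ctl_sockets()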
Nov 26 01:16:01 compute-0 ceph-mon[192746]: from='osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:16:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 26 01:16:01 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1777324384; not ready for session (expect reconnect)
Nov 26 01:16:01 compute-0 systemd[1]: Reloading.
Nov 26 01:16:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:01 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:01 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:16:01 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:16:02 compute-0 systemd[1]: Starting Ceph osd.2 for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:16:02 compute-0 podman[208566]: 2025-11-26 01:16:02.516304938 +0000 UTC m=+0.083233616 container create ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:16:02 compute-0 podman[208566]: 2025-11-26 01:16:02.477102693 +0000 UTC m=+0.044031431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:02 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1777324384; not ready for session (expect reconnect)
Nov 26 01:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:02 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:02 compute-0 podman[208566]: 2025-11-26 01:16:02.660369841 +0000 UTC m=+0.227298559 container init ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:16:02 compute-0 podman[208566]: 2025-11-26 01:16:02.686558232 +0000 UTC m=+0.253486870 container start ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:02 compute-0 podman[208566]: 2025-11-26 01:16:02.698951428 +0000 UTC m=+0.265880146 container attach ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:16:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 26 01:16:03 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1777324384; not ready for session (expect reconnect)
Nov 26 01:16:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:03 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:03 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:03 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:16:03 compute-0 bash[208566]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:16:03 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 01:16:03 compute-0 bash[208566]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 01:16:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:03 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 01:16:03 compute-0 bash[208566]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 26 01:16:03 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 01:16:03 compute-0 bash[208566]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 26 01:16:04 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:04 compute-0 bash[208566]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:04 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:16:04 compute-0 bash[208566]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 26 01:16:04 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate[208582]: --> ceph-volume raw activate successful for osd ID: 2
Nov 26 01:16:04 compute-0 bash[208566]: --> ceph-volume raw activate successful for osd ID: 2
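The "Running command:" lines above are the full raw-activate sequence for osd.2: fix ownership, prime the OSD directory from the BlueStore device, re-own the device nodes, and symlink the block device into place. A hedged replay of exactly those commands as a plain subprocess script — cephadm drives the same steps through ceph-volume, so this is a reconstruction for illustration, with paths taken verbatim from the log; only run it against a disposable OSD.

import subprocess

osd_dir = "/var/lib/ceph/osd/ceph-2"
lv = "/dev/mapper/ceph_vg2-ceph_lv2"

for cmd in (
    ["chown", "-R", "ceph:ceph", osd_dir],
    ["ceph-bluestore-tool", "prime-osd-dir",
     "--path", osd_dir, "--no-mon-config", "--dev", lv],
    ["chown", "-h", "ceph:ceph", lv],
    ["chown", "-R", "ceph:ceph", "/dev/dm-2"],
    ["ln", "-s", lv, osd_dir + "/block"],
    ["chown", "-R", "ceph:ceph", osd_dir],
):
    print("Running command:", " ".join(cmd))
    subprocess.run(cmd, check=True)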
Nov 26 01:16:04 compute-0 systemd[1]: libpod-ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5.scope: Deactivated successfully.
Nov 26 01:16:04 compute-0 podman[208566]: 2025-11-26 01:16:04.060658334 +0000 UTC m=+1.627587012 container died ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:16:04 compute-0 systemd[1]: libpod-ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5.scope: Consumed 1.385s CPU time.
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.255 iops: 4673.326 elapsed_sec: 0.642
Nov 26 01:16:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [WRN] : OSD bench result of 4673.326116 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
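The WRN line above documents a sanity check: the startup bench measured 4673.326 IOPS, far outside the 50–500 IOPS plausibility window for this (HDD-classed, virtualized) device, so the configured capacity stays at the 315 IOPS default rather than being overridden. A sketch of that decision logic; the thresholds and default come straight from the log line, and this mirrors the behavior rather than Ceph's C++ implementation.

def maybe_override_iops(measured, current=315.0, lo=50.0, hi=500.0):
    if lo <= measured <= hi:
        return measured  # bench result looks plausible; adopt it
    print(f"OSD bench result of {measured:.6f} IOPS is not within the "
          f"threshold limit range of {lo:.6f} IOPS and {hi:.6f} IOPS; "
          f"IOPS capacity is unchanged at {current:.6f} IOPS.")
    return current

assert maybe_override_iops(4673.326116) == 315.0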
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 0 waiting for initial osdmap
Nov 26 01:16:04 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1[207770]: 2025-11-26T01:16:04.065+0000 7f4093031640 -1 osd.1 0 waiting for initial osdmap
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 set_numa_affinity not setting numa affinity
Nov 26 01:16:04 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-1[207770]: 2025-11-26T01:16:04.107+0000 7f408e659640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
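The set_numa_affinity failure above is an ENOENT with an empty public interface name. A plausible reconstruction of the underlying lookup: the kernel exposes a NUMA node per network device under /sys/class/net/<iface>/device/numa_node, and with no interface name there is nothing to read. Whether Ceph reads exactly this file is an assumption; the sysfs path itself is the standard kernel location.

import os

def interface_numa_node(iface):
    path = f"/sys/class/net/{iface}/device/numa_node"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        # Matches "(2) No such file or directory" in the log.
        return None

print(interface_numa_node(""))      # None: empty interface name
print(interface_numa_node("eth0"))  # NUMA node, or None on VMs without one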
Nov 26 01:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fc85dba48796cf5a32113f3ab52024c7c385757647aa8c9ab10425b877b7c93-merged.mount: Deactivated successfully.
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 26 01:16:04 compute-0 podman[208566]: 2025-11-26 01:16:04.170437759 +0000 UTC m=+1.737366407 container remove ef5ef34b6068674ea915718281e4881a69340e2b2a3d1de5854c9fe31d669fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:04 compute-0 podman[208775]: 2025-11-26 01:16:04.496142985 +0000 UTC m=+0.087263398 container create f57382b838490109a6cca4de8076b512198d595709b93ee5d84e049d84b5db2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:16:04 compute-0 podman[208775]: 2025-11-26 01:16:04.460605012 +0000 UTC m=+0.051725475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f238e05c7a6471a9c420e8f4f9455f43c206e4a4a4915da546b6ca3ecea41aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f238e05c7a6471a9c420e8f4f9455f43c206e4a4a4915da546b6ca3ecea41aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f238e05c7a6471a9c420e8f4f9455f43c206e4a4a4915da546b6ca3ecea41aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f238e05c7a6471a9c420e8f4f9455f43c206e4a4a4915da546b6ca3ecea41aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f238e05c7a6471a9c420e8f4f9455f43c206e4a4a4915da546b6ca3ecea41aa3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:04 compute-0 podman[208775]: 2025-11-26 01:16:04.627688088 +0000 UTC m=+0.218808551 container init f57382b838490109a6cca4de8076b512198d595709b93ee5d84e049d84b5db2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:16:04 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1777324384; not ready for session (expect reconnect)
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:04 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 26 01:16:04 compute-0 podman[208775]: 2025-11-26 01:16:04.661640336 +0000 UTC m=+0.252760739 container start f57382b838490109a6cca4de8076b512198d595709b93ee5d84e049d84b5db2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 26 01:16:04 compute-0 bash[208775]: f57382b838490109a6cca4de8076b512198d595709b93ee5d84e049d84b5db2d
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384] boot
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 26 01:16:04 compute-0 systemd[1]: Started Ceph osd.2 for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 12 state: booting -> active
Nov 26 01:16:04 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:04 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:04 compute-0 ceph-osd[208794]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:16:04 compute-0 ceph-osd[208794]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 26 01:16:04 compute-0 ceph-osd[208794]: pidfile_write: ignore empty --pid-file
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573138ab800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573138ab800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573138ab800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573138ab800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573146ed800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573146ed800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573146ed800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573146ed800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573146ed800 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:04 compute-0 ceph-osd[208794]: bdev(0x5573138ab800 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 01:16:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 26 01:16:05 compute-0 ceph-osd[208794]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 26 01:16:05 compute-0 ceph-osd[208794]: load: jerasure load: lrc 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 01:16:05 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:05 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 01:16:05 compute-0 ceph-osd[208794]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
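A back-of-envelope check on the mClock numbers above, under the assumption (not confirmed by the log) that the cost per IO is approximately the per-shard bandwidth divided by the configured IOPS capacity: 157286400.00 bytes/second is exactly 150 MiB/s, the default HDD sequential bandwidth, and dividing by the 315 IOPS capacity seen earlier lands within about one byte/io of the logged 499321.90.

bandwidth_per_shard = 150 * 1024 * 1024   # 157286400 bytes/second
iops_capacity = 315.0                     # default HDD IOPS capacity per the log

print(bandwidth_per_shard)                  # 157286400
print(bandwidth_per_shard / iops_capacity)  # ~499322.86 bytes/io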
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs mount
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs mount shared_bdev_used = 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Git sha 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DB SUMMARY
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DB Session ID:  VEAXK2REGA6CYHHI28DB
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                     Options.env: 0x55731473fd50
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                Options.info_log: 0x5573139327e0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.write_buffer_manager: 0x557314844460
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.row_cache: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.wal_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.wal_compression: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Compression algorithms supported:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kZSTD supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kXpressCompression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kBZip2Compression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kLZ4Compression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kZlibCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: #011kSnappyCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55731391f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
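
Note on the sizing printed above: with level_compaction_dynamic_level_bytes 0, the targets for L1 and deeper follow (roughly) from max_bytes_for_level_base (1073741824) and max_bytes_for_level_multiplier (8.0), each deeper level being the previous one times the multiplier; L0 is driven by the file-count triggers instead (compaction at 8 files, slowdown at 20, stop at 36). A minimal sketch of that arithmetic, using only values from this dump (the addtl factors are all 1 here, so they drop out):

    # Sketch: static per-level capacity targets implied by the options above.
    base = 1073741824          # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0                 # Options.max_bytes_for_level_multiplier
    num_levels = 7             # Options.num_levels (L0..L6)

    target = base
    for level in range(1, num_levels):
        print(f"L{level} target ~ {target / 2**30:.0f} GiB")
        target = int(target * mult)
    # -> L1 ~ 1 GiB, L2 ~ 8 GiB, L3 ~ 64 GiB, L4 ~ 512 GiB, ...
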
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
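
Note: two distinct BinnedLRUCache instances appear in these dumps; the p-1/p-2 families point at block_cache 0x55731391f1f0 (capacity 483183820) while the O-* families point at 0x55731391f090 (capacity 536870912). A quick check of what those byte counts amount to:

    # Sketch: the two block-cache capacities printed in the dumps above.
    p_cache = 483183820    # block_cache 0x55731391f1f0 (p-* column families)
    o_cache = 536870912    # block_cache 0x55731391f090 (O-* column families)

    MiB = 2**20
    print(f"p-* cache: {p_cache / MiB:.1f} MiB")   # ~460.8 MiB
    print(f"O-* cache: {o_cache / MiB:.0f} MiB")   # exactly 512 MiB
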
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313932180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f2e65049-4602-4a74-9769-c9bf81dff715
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765432180, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765432587, "job": 1, "event": "recovery_finished"}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
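
-- Annotation (not part of the log): the option string in the _open_db line above maps field-for-field onto the stock RocksDB C++ API. A minimal sketch under that assumption (librocksdb headers; this is not Ceph's actual code, which parses the string at runtime):

    #include <rocksdb/options.h>

    // Sketch only: an Options object equivalent to the logged string
    // compression=kLZ4Compression,max_write_buffer_number=64,...
    rocksdb::Options bluestore_like_options() {
      rocksdb::Options o;
      o.compression = rocksdb::kLZ4Compression;       // on-disk block compression
      o.max_write_buffer_number = 64;                 // up to 64 memtables
      o.min_write_buffer_number_to_merge = 6;         // flush 6 at a time
      o.compaction_style = rocksdb::kCompactionStyleLevel;
      o.write_buffer_size = 16777216;                 // 16 MiB per memtable
      o.max_background_jobs = 4;
      o.level0_file_num_compaction_trigger = 8;
      o.max_bytes_for_level_base = 1073741824;        // 1 GiB target at L1
      o.max_bytes_for_level_multiplier = 8;           // 8x growth per level
      o.compaction_readahead_size = 2 * 1024 * 1024;  // "2MB" in the string
      o.max_total_wal_size = 1073741824;
      o.writable_file_max_buffer_size = 0;
      return o;
    }
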
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: freelist init
Nov 26 01:16:05 compute-0 ceph-osd[208794]: freelist _read_cfg
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
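
-- Annotation (not part of the log): a worked check of the allocator figures. Capacity 0x4ffc00000 is 21470642176 bytes, just under 20 GiB; free 0x4ffbfd000 leaves 0x4ffc00000 - 0x4ffbfd000 = 0x3000 = 12 KiB allocated. That is a freshly provisioned OSD, consistent with the two free extents and the near-zero fragmentation score.
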
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs umount
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) close
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bdev(0x55731476f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
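
-- Annotation (not part of the log): the EINVAL reported for F_SET_FILE_RW_HINT three lines up is an advisory write-lifetime hint being rejected; the device open continues normally. A minimal sketch of such a call, assuming the Linux uapi fcntl constants (values given in the comments; this is not Ceph's code):

    #include <fcntl.h>
    #include <cerrno>
    #include <cstdint>
    #include <cstdio>

    // Fallback definitions if the libc headers predate these
    // (values from the Linux uapi: F_LINUX_SPECIFIC_BASE is 1024).
    #ifndef F_SET_FILE_RW_HINT
    #define F_SET_FILE_RW_HINT 1038    // F_LINUX_SPECIFIC_BASE + 14
    #endif
    #ifndef RWH_WRITE_LIFE_MEDIUM
    #define RWH_WRITE_LIFE_MEDIUM 3
    #endif

    // Advisory hint about expected data lifetime; devices or kernels
    // without per-file hint support reject it with EINVAL, as logged.
    static void set_write_life_hint(int fd) {
      uint64_t hint = RWH_WRITE_LIFE_MEDIUM;
      if (fcntl(fd, F_SET_FILE_RW_HINT, &hint) < 0 && errno == EINVAL)
        std::fprintf(stderr, "rw hint unsupported (EINVAL), continuing\n");
    }
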
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs mount
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluefs mount shared_bdev_used = 4718592
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
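
-- Annotation (not part of the log): the 20397110067-byte budget set for both db and db.slow is exactly 95% of the 21470642176-byte device (21470642176 × 0.95 = 20397110067.2, truncated), i.e. the shared-device share handed to RocksDB. The 95% figure is inferred from the arithmetic, not quoted from Ceph source.
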
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: RocksDB version: 7.9.2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Git sha 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DB SUMMARY
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DB Session ID:  VEAXK2REGA6CYHHI28DA
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: CURRENT file:  CURRENT
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: IDENTITY file:  IDENTITY
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.error_if_exists: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.create_if_missing: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.paranoid_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                     Options.env: 0x5573148d4310
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                Options.info_log: 0x557313bf8f40
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_file_opening_threads: 16
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.statistics: (nil)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.use_fsync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.max_log_file_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.allow_fallocate: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.use_direct_reads: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.create_missing_column_families: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.db_log_dir: 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                                 Options.wal_dir: db.wal
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.advise_random_on_open: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.write_buffer_manager: 0x5573148446e0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                            Options.rate_limiter: (nil)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.unordered_write: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.row_cache: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                              Options.wal_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.allow_ingest_behind: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.two_write_queues: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.manual_wal_flush: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.wal_compression: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.atomic_flush: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.log_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.allow_data_in_errors: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.db_host_id: __hostname__
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_background_jobs: 4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_background_compactions: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_subcompactions: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.max_open_files: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.max_background_flushes: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Compression algorithms supported:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kZSTD supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kXpressCompression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kBZip2Compression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kLZ4Compression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kZlibCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: 	kSnappyCompression supported: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
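
-- Annotation (not part of the log): the BinnedLRUCache capacity of 483183820 bytes is exactly 45% of 1 GiB (1073741824 × 0.45 = 483183820.8, truncated), which matches a 1 GiB BlueStore cache with a 0.45 KV ratio; that split is inferred from the arithmetic and the usual BlueStore cache-ratio knobs, not quoted from source.
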
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
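The column-family dumps that follow all repeat the same compression setup: LZ4 at every level, bottommost compression disabled, and compression_opts.level reported as 32767, which is RocksDB's kDefaultCompressionLevel sentinel ("use the codec's own default"), not a literal level. As a minimal C++ sketch only, assuming nothing about Ceph's actual configuration code, the equivalent settings on a stock rocksdb::ColumnFamilyOptions object look roughly like this:

    // Illustrative sketch: mirrors the compression settings logged above.
    // Not taken from Ceph's source.
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeCompressionOpts() {
      rocksdb::ColumnFamilyOptions opts;
      opts.compression = rocksdb::kLZ4Compression;               // Options.compression: LZ4
      opts.bottommost_compression = rocksdb::kDisableCompressionOption;  // "Disabled"
      opts.compression_opts.window_bits = -14;                   // deflate-style knob; LZ4 ignores it
      opts.compression_opts.level =
          rocksdb::CompressionOptions::kDefaultCompressionLevel; // logged as 32767
      opts.compression_opts.max_dict_bytes = 0;                  // dictionary compression off
      opts.compression_opts.parallel_threads = 1;
      return opts;
    }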
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
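With level_compaction_dynamic_level_bytes off, the byte targets above imply static level capacities of max_bytes_for_level_base * max_bytes_for_level_multiplier^(L-1): 1 GiB at L1, 8 GiB at L2, up to 32 TiB at L6 (the addtl[] per-level factors are all 1, and L0 is triggered by file count, not bytes). A quick standalone arithmetic check:

    // Level targets implied by the options above: base 1 GiB, multiplier 8,
    // num_levels = 7, static (non-dynamic) level sizing.
    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t base = 1ULL << 30;  // max_bytes_for_level_base = 1073741824
      uint64_t target = base;
      for (int level = 1; level < 7; ++level) {  // L0 is file-count-triggered
        std::printf("L%d target: %llu bytes\n", level, (unsigned long long)target);
        target *= 8;  // max_bytes_for_level_multiplier; addtl[] factors are all 1
      }
      return 0;
    }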
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
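The p-* and O-* names are BlueStore's RocksDB key-space shards, each a separate column family opened inside one DB instance. A sketch of opening such a set of column families with the stock RocksDB API follows; the path and the shared options object are placeholders, not BlueStore's actual wiring:

    // Placeholder sketch: one DB, multiple column families as named in this log.
    #include <rocksdb/db.h>
    #include <vector>

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;

      rocksdb::ColumnFamilyOptions cf_opts;  // per-shard options as dumped above
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"p-0", cf_opts}, {"p-1", cf_opts}, {"p-2", cf_opts},
          {"O-0", cf_opts}, {"O-1", cf_opts},
      };

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/example-kv", cfs, &handles, &db);
      if (!s.ok()) return 1;

      for (auto* h : handles) delete h;  // release handles before closing the DB
      delete db;
      return 0;
    }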
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557313928740)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391f1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
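The O-* table_factory dump below differs from the p-* ones mainly in its block cache instance (capacity 536870912 vs 483183820), i.e. each shard group gets its own cache. The BinnedLRUCache named in the log is Ceph's own cache implementation, not part of stock RocksDB; the sketch below approximates the same table settings with RocksDB's built-in NewLRUCache, and the bloom bits-per-key value is illustrative since the log does not record it:

    // Approximation of the block-based table settings dumped below,
    // using stock RocksDB types in place of Ceph's BinnedLRUCache.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeTableOpts() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.metadata_block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.whole_key_filtering = true;
      t.block_cache = rocksdb::NewLRUCache(536870912 /* capacity */,
                                           4 /* num_shard_bits */);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed

      rocksdb::ColumnFamilyOptions opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }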
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5573139286c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391ef30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 26 01:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
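Taken together, write_buffer_size = 16 MiB, min_write_buffer_number_to_merge = 6, and max_write_buffer_number = 64 mean each flush merges roughly six 16 MiB memtables (about 96 MiB before compression) and the worst-case memtable footprint per column family is 1 GiB. The arithmetic, as a standalone check:

    // Back-of-the-envelope memtable budget from the options dumped above.
    #include <cstdio>

    int main() {
      const unsigned long long write_buffer_size = 16ULL << 20;  // 16777216
      const int max_write_buffer_number = 64;
      const int min_write_buffer_number_to_merge = 6;

      std::printf("flush unit: %llu bytes\n",
                  write_buffer_size * min_write_buffer_number_to_merge);  // ~96 MiB
      std::printf("worst-case memtables per CF: %llu bytes\n",
                  write_buffer_size * max_write_buffer_number);           // 1 GiB
      return 0;
    }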
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5573139286c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55731391ef30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:           Options.merge_operator: None
Nov 26 01:16:05 compute-0 ceph-mon[192746]: OSD bench result of 4673.326116 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
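The mon message above is the mClock scheduler rejecting an implausible boot-time benchmark (4673 IOPS reported by an HDD-classed device) and keeping the default 315 IOPS. Following the log's own recommendation, the capacity can be measured externally (e.g. with fio) and pinned so the boot-time bench no longer matters. A minimal sketch, assuming a host with the ceph CLI and an admin keyring, and a hypothetical measured value of 450 IOPS:

    import subprocess

    # Hypothetical value measured with an external fio run against osd.1's device.
    measured_iops = 450

    # Pin the mClock IOPS capacity for osd.1; the option name comes from the
    # log message itself (osd_mclock_max_capacity_iops_hdd for HDD-class OSDs).
    subprocess.run(
        ["ceph", "config", "set", "osd.1",
         "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
        check=True,
    )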
Nov 26 01:16:05 compute-0 ceph-mon[192746]: osd.1 [v2:192.168.122.100:6806/1777324384,v1:192.168.122.100:6807/1777324384] boot
Nov 26 01:16:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.compaction_filter_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.sst_partitioner_factory: None
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5573139286c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55731391ef30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.write_buffer_size: 16777216
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.max_write_buffer_number: 64
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.compression: LZ4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.num_levels: 7
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.level: 32767
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.compression_opts.strategy: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                  Options.compression_opts.enabled: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.arena_block_size: 1048576
Nov 26 01:16:05 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.disable_auto_compactions: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.inplace_update_support: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.bloom_locality: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                    Options.max_successive_merges: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.paranoid_file_checks: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.force_consistency_checks: 1
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.report_bg_io_stats: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                               Options.ttl: 2592000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                       Options.enable_blob_files: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                           Options.min_blob_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                          Options.blob_file_size: 268435456
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb:                Options.blob_file_starting_level: 0
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 26 01:16:05 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f2e65049-4602-4a74-9769-c9bf81dff715
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765746539, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765754582, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119765, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f2e65049-4602-4a74-9769-c9bf81dff715", "db_session_id": "VEAXK2REGA6CYHHI28DA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765761046, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119765, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f2e65049-4602-4a74-9769-c9bf81dff715", "db_session_id": "VEAXK2REGA6CYHHI28DA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765767208, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119765, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f2e65049-4602-4a74-9769-c9bf81dff715", "db_session_id": "VEAXK2REGA6CYHHI28DA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764119765770158, "job": 1, "event": "recovery_finished"}
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 26 01:16:05 compute-0 ceph-mgr[193049]: [devicehealth INFO root] creating main.db for devicehealth
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557313a8dc00
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: DB pointer 0x557314829a00
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
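The option string logged by _open_db is the bluestore_rocksdb_options value that BlueStore handed to RocksDB; the per-column-family dumps above (write_buffer_size=16777216, max_write_buffer_number=64, LZ4 compression, and so on) are RocksDB restating those same settings. A sketch for reading the value back, assuming the ceph CLI and an admin keyring are available:

    import subprocess

    # `ceph config get` reports the value osd.2 would apply on its next start;
    # it should match the option string in the _open_db line above.
    out = subprocess.run(
        ["ceph", "config", "get", "osd.2", "bluestore_rocksdb_options"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())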
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 26 01:16:05 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 460.80 MB usag
Nov 26 01:16:05 compute-0 ceph-osd[208794]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 26 01:16:05 compute-0 ceph-osd[208794]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 26 01:16:05 compute-0 ceph-osd[208794]: _get_class not permitted to load lua
Nov 26 01:16:05 compute-0 ceph-osd[208794]: _get_class not permitted to load sdk
Nov 26 01:16:05 compute-0 ceph-osd[208794]: _get_class not permitted to load test_remote_reads
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.835446924 +0000 UTC m=+0.063504925 container create f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 load_pgs
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 load_pgs opened 0 pgs
Nov 26 01:16:05 compute-0 ceph-osd[208794]: osd.2 0 log_to_monitors true
Nov 26 01:16:05 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2[208790]: 2025-11-26T01:16:05.840+0000 7fc3fe175740 -1 osd.2 0 log_to_monitors true
Nov 26 01:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 01:16:05 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 01:16:05 compute-0 systemd[1]: Started libpod-conmon-f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479.scope.
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.807270527 +0000 UTC m=+0.035328578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:05 compute-0 ceph-mgr[193049]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
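The empty parentheses in the devicehealth error above mean osd.2 answered the `smart` admin-socket query with an empty payload, which is unsurprising for a daemon still booting; the mon-side `smart` call in the surrounding audit lines completes normally. To repeat the query by hand, one can hit the daemon's admin socket directly; a sketch assuming shell access (e.g. a cephadm shell) on the host where osd.2's asok lives:

    import subprocess

    # Same admin-socket command the mgr devicehealth module issues; an empty
    # reply here reproduces the parse failure logged above.
    out = subprocess.run(
        ["ceph", "daemon", "osd.2", "smart"],
        capture_output=True, text=True,
    )
    print(out.stdout or "<empty reply>")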
Nov 26 01:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.948541892 +0000 UTC m=+0.176599943 container init f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.962924054 +0000 UTC m=+0.190982055 container start f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.969007873 +0000 UTC m=+0.197065954 container attach f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:05 compute-0 beautiful_snyder[209391]: 167 167
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 01:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 01:16:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 01:16:05 compute-0 systemd[1]: libpod-f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479.scope: Deactivated successfully.
Nov 26 01:16:05 compute-0 podman[209332]: 2025-11-26 01:16:05.975409512 +0000 UTC m=+0.203467533 container died f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0ce80db3f4e51e39aca2394ecdd279dcb7c6d7f7b005ac01b5931b54dc28f1-merged.mount: Deactivated successfully.
Nov 26 01:16:06 compute-0 podman[209332]: 2025-11-26 01:16:06.025600464 +0000 UTC m=+0.253658475 container remove f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:06 compute-0 systemd[1]: libpod-conmon-f55d04783e528c94960d1e224010421143a849c627cae1478342416b64900479.scope: Deactivated successfully.
Nov 26 01:16:06 compute-0 podman[209418]: 2025-11-26 01:16:06.291764016 +0000 UTC m=+0.084352526 container create 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:06 compute-0 podman[209418]: 2025-11-26 01:16:06.264225687 +0000 UTC m=+0.056814227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:06 compute-0 systemd[1]: Started libpod-conmon-18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e.scope.
Nov 26 01:16:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d2adbf2111dab9fda42b53a7a1f1e570b15a12cc175fdabd09bc37e86509c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d2adbf2111dab9fda42b53a7a1f1e570b15a12cc175fdabd09bc37e86509c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d2adbf2111dab9fda42b53a7a1f1e570b15a12cc175fdabd09bc37e86509c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d2adbf2111dab9fda42b53a7a1f1e570b15a12cc175fdabd09bc37e86509c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:06 compute-0 podman[209418]: 2025-11-26 01:16:06.457282559 +0000 UTC m=+0.249871099 container init 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:16:06 compute-0 podman[209418]: 2025-11-26 01:16:06.483537752 +0000 UTC m=+0.276126262 container start 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:16:06 compute-0 podman[209418]: 2025-11-26 01:16:06.489707134 +0000 UTC m=+0.282295654 container attach 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 26 01:16:06 compute-0 ceph-mon[192746]: from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 26 01:16:06 compute-0 ceph-mon[192746]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 01:16:06 compute-0 ceph-mon[192746]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 01:16:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 01:16:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 26 01:16:06 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 26 01:16:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 26 01:16:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 26 01:16:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 26 01:16:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:06 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:06 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 26 01:16:06 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 26 01:16:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 01:16:07 compute-0 distracted_napier[209433]: {
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_id": 0,
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "type": "bluestore"
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    },
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_id": 2,
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "type": "bluestore"
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    },
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_id": 1,
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:16:07 compute-0 distracted_napier[209433]:        "type": "bluestore"
Nov 26 01:16:07 compute-0 distracted_napier[209433]:    }
Nov 26 01:16:07 compute-0 distracted_napier[209433]: }
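The JSON printed by the distracted_napier container enumerates the three BlueStore OSDs on this host by uuid, LV device, and osd_id; the format matches what cephadm gathers from a ceph-volume listing run in a short-lived container (the exact command is not in the log, so that attribution is an assumption). A small sketch for folding such output into an osd_id-to-device map:

    import json

    def osd_devices(raw_json: str) -> dict[int, str]:
        """Map osd_id -> device path from a ceph-volume style listing,
        as printed by the container above."""
        data = json.loads(raw_json)
        return {entry["osd_id"]: entry["device"] for entry in data.values()}

    # One entry copied from the log output above:
    sample = ('{"835781ef-644a-4834-abb3-029e5bcba0ff": '
              '{"ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c", '
              '"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0, '
              '"osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff", '
              '"type": "bluestore"}}')
    print(osd_devices(sample))  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}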
Nov 26 01:16:07 compute-0 systemd[1]: libpod-18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e.scope: Deactivated successfully.
Nov 26 01:16:07 compute-0 podman[209418]: 2025-11-26 01:16:07.681660259 +0000 UTC m=+1.474248769 container died 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:07 compute-0 systemd[1]: libpod-18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e.scope: Consumed 1.199s CPU time.
Nov 26 01:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17d2adbf2111dab9fda42b53a7a1f1e570b15a12cc175fdabd09bc37e86509c-merged.mount: Deactivated successfully.
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 done with init, starting boot process
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 start_boot
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
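The four maybe_override_options_for_qos lines show the mClock scheduler pinning the recovery/backfill knobs it manages (osd_max_backfills=1, osd_recovery_max_active_hdd=3, and so on); under mClock these are not meant to be tuned directly. A sketch for confirming what the running daemon is using, assuming the ceph CLI and an admin keyring:

    import subprocess

    # `ceph config show` reports the running daemon's effective setting, which
    # should match the overrides logged above.
    for opt in ("osd_max_backfills",
                "osd_recovery_max_active_hdd",
                "osd_recovery_max_active_ssd"):
        out = subprocess.run(["ceph", "config", "show", "osd.2", opt],
                             capture_output=True, text=True, check=True)
        print(opt, "=", out.stdout.strip())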
Nov 26 01:16:07 compute-0 ceph-osd[208794]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:07 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:07 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/263197169; not ready for session (expect reconnect)
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:07 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:07 compute-0 ceph-mon[192746]: from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 26 01:16:07 compute-0 ceph-mon[192746]: from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
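The weight 0.0195 in the create-or-move call follows the usual CRUSH convention of device size in TiB: the pgmap lines in this section account for 20 GiB per OSD (40 GiB total with two OSDs up, 60 GiB with three), and 20 GiB converts to exactly that figure. A quick check:

    # CRUSH weights default to device size in TiB.
    size_gib = 20                  # per-OSD size implied by the pgmap lines
    weight = size_gib / 1024
    print(round(weight, 4))        # 0.0195, the weight in the command above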
Nov 26 01:16:07 compute-0 podman[209418]: 2025-11-26 01:16:07.791738383 +0000 UTC m=+1.584326903 container remove 18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vbisdw(active, since 86s)
Nov 26 01:16:07 compute-0 systemd[1]: libpod-conmon-18e152217e74d4bae56df08e1bb009d79d890cd6ed181069806a79a0abd7411e.scope: Deactivated successfully.
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:08 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/263197169; not ready for session (expect reconnect)
Nov 26 01:16:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:08 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:08 compute-0 ceph-mon[192746]: from='osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 26 01:16:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 26 01:16:09 compute-0 podman[209694]: 2025-11-26 01:16:09.616288363 +0000 UTC m=+0.142067718 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:16:09 compute-0 podman[209694]: 2025-11-26 01:16:09.758410822 +0000 UTC m=+0.284190127 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:09 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/263197169; not ready for session (expect reconnect)
Nov 26 01:16:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:09 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 26 01:16:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:10 compute-0 ceph-mgr[193049]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/263197169; not ready for session (expect reconnect)
Nov 26 01:16:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:10 compute-0 ceph-mgr[193049]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
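The repeating "failed to return metadata for osd.2: (2) No such file or directory" messages are a benign startup race: the mgr polls "osd metadata" as soon as it learns about the daemon, but the mon has nothing to return until osd.2 finishes booting into the osdmap (epoch e16, a few lines below). A minimal retry sketch, assuming a hypothetical run_mon_command helper that returns (ok, output):

    import time

    def wait_for_osd_metadata(run_mon_command, osd_id, attempts=10, delay=1.0):
        """Poll 'osd metadata' until the mon can answer; None on timeout.

        The mon answers ENOENT until the OSD has booted into the osdmap.
        """
        for _ in range(attempts):
            ok, out = run_mon_command({"prefix": "osd metadata", "id": osd_id})
            if ok:
                return out
            time.sleep(delay)
        return None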
Nov 26 01:16:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.845 iops: 5336.215 elapsed_sec: 0.562
Nov 26 01:16:11 compute-0 ceph-osd[208794]: log_channel(cluster) log [WRN] : OSD bench result of 5336.215165 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
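The bench numbers are self-consistent: the startup bench at 01:16:07 wrote 12288000 bytes in 4 KiB blocks, i.e. 3000 IOs, and finished in roughly 0.562 s. A worked check with the logged values (the small discrepancies come from the truncated elapsed time):

    count_bytes = 12_288_000           # "bench count 12288000"
    bsize = 4 * 1024                   # "bsize 4 KiB"
    elapsed = 0.562                    # "elapsed_sec: 0.562" (truncated)
    ios = count_bytes / bsize          # 3000.0 writes
    iops = ios / elapsed               # ~5338; the log's 5336.215 used the
                                       # untruncated elapsed time
    bw_mib = iops * bsize / 2**20      # ~20.85 MiB/s vs logged 20.845
    print(ios, round(iops, 1), round(bw_mib, 3))

Because ~5336 IOPS falls far outside the 50-500 IOPS plausibility window for an hdd-classed device (these OSDs sit on LVs inside a VM, so the hdd class is misleading), the measurement is discarded and the capacity stays at the 315 IOPS default; the warning's own remedy is to benchmark with fio and then set osd_mclock_max_capacity_iops_hdd for osd.2 via ceph config set.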
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 0 waiting for initial osdmap
Nov 26 01:16:11 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2[208790]: 2025-11-26T01:16:11.054+0000 7fc3fa0f5640 -1 osd.2 0 waiting for initial osdmap
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 set_numa_affinity not setting numa affinity
Nov 26 01:16:11 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-osd-2[208790]: 2025-11-26T01:16:11.091+0000 7fc3f571d640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 26 01:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 26 01:16:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 26 01:16:11 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169] boot
Nov 26 01:16:11 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 26 01:16:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 26 01:16:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 26 01:16:11 compute-0 ceph-osd[208794]: osd.2 16 state: booting -> active
Nov 26 01:16:12 compute-0 ceph-mon[192746]: OSD bench result of 5336.215165 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 26 01:16:12 compute-0 ceph-mon[192746]: osd.2 [v2:192.168.122.100:6810/263197169,v1:192.168.122.100:6811/263197169] boot
Nov 26 01:16:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 26 01:16:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 26 01:16:12 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 26 01:16:12 compute-0 podman[210075]: 2025-11-26 01:16:12.930143373 +0000 UTC m=+0.090364454 container create 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:12 compute-0 podman[210075]: 2025-11-26 01:16:12.896689599 +0000 UTC m=+0.056910710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:13 compute-0 systemd[1]: Started libpod-conmon-600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122.scope.
Nov 26 01:16:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:13 compute-0 podman[210075]: 2025-11-26 01:16:13.062744187 +0000 UTC m=+0.222965268 container init 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 26 01:16:13 compute-0 podman[210075]: 2025-11-26 01:16:13.077991442 +0000 UTC m=+0.238212493 container start 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:13 compute-0 podman[210075]: 2025-11-26 01:16:13.082240631 +0000 UTC m=+0.242461682 container attach 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:13 compute-0 lucid_jennings[210091]: 167 167
Nov 26 01:16:13 compute-0 systemd[1]: libpod-600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122.scope: Deactivated successfully.
Nov 26 01:16:13 compute-0 podman[210075]: 2025-11-26 01:16:13.090417459 +0000 UTC m=+0.250638540 container died 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6e187a1ec1fbbb01c70e74718595a330435c2dd08b64ccc2294385c423ff32-merged.mount: Deactivated successfully.
Nov 26 01:16:13 compute-0 podman[210075]: 2025-11-26 01:16:13.161203086 +0000 UTC m=+0.321424177 container remove 600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:13 compute-0 systemd[1]: libpod-conmon-600303e06de0c4409c28a432df9e61fb945c3e41ed88af6b9e281163222ce122.scope: Deactivated successfully.
Nov 26 01:16:13 compute-0 podman[210114]: 2025-11-26 01:16:13.434999942 +0000 UTC m=+0.106586808 container create 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:16:13 compute-0 podman[210114]: 2025-11-26 01:16:13.383070012 +0000 UTC m=+0.054656918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:13 compute-0 systemd[1]: Started libpod-conmon-8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f.scope.
Nov 26 01:16:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa83f22f25490fd57b36648ee63276d472e216b43db6aa6f6c91f6de58500e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa83f22f25490fd57b36648ee63276d472e216b43db6aa6f6c91f6de58500e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa83f22f25490fd57b36648ee63276d472e216b43db6aa6f6c91f6de58500e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa83f22f25490fd57b36648ee63276d472e216b43db6aa6f6c91f6de58500e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:13 compute-0 podman[210114]: 2025-11-26 01:16:13.596898883 +0000 UTC m=+0.268485779 container init 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:16:13 compute-0 podman[210114]: 2025-11-26 01:16:13.632140377 +0000 UTC m=+0.303727243 container start 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:13 compute-0 podman[210114]: 2025-11-26 01:16:13.639186384 +0000 UTC m=+0.310773300 container attach 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 01:16:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:14 compute-0 podman[210154]: 2025-11-26 01:16:14.84976749 +0000 UTC m=+0.100771735 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:16:14 compute-0 podman[210156]: 2025-11-26 01:16:14.87844088 +0000 UTC m=+0.122013348 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 26 01:16:14 compute-0 podman[210162]: 2025-11-26 01:16:14.935222146 +0000 UTC m=+0.159952218 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:16:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]: [
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:    {
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "available": false,
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "ceph_device": false,
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "lsm_data": {},
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "lvs": [],
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "path": "/dev/sr0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "rejected_reasons": [
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "Has a FileSystem",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "Insufficient space (<5GB)"
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        ],
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        "sys_api": {
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "actuators": null,
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "device_nodes": "sr0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "devname": "sr0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "human_readable_size": "482.00 KB",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "id_bus": "ata",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "model": "QEMU DVD-ROM",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "nr_requests": "2",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "parent": "/dev/sr0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "partitions": {},
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "path": "/dev/sr0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "removable": "1",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "rev": "2.5+",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "ro": "0",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "rotational": "1",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "sas_address": "",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "sas_device_handle": "",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "scheduler_mode": "mq-deadline",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "sectors": 0,
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "sectorsize": "2048",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "size": 493568.0,
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "support_discard": "2048",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "type": "disk",
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:            "vendor": "QEMU"
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:        }
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]:    }
Nov 26 01:16:15 compute-0 hardcore_torvalds[210131]: ]
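The inventory run above (hardcore_torvalds) shows why /dev/sr0 can never become an OSD: "available": false plus two rejected_reasons, and a sys_api size of 493568 bytes, which is exactly the 482.00 KB shown as human_readable_size and nowhere near the 5 GB floor. A small sketch of consuming such an entry, with the record cut down to the fields that drive the rejection:

    # One record from the inventory above, abbreviated.
    dev = {
        "path": "/dev/sr0",
        "available": False,
        "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"],
        "sys_api": {"size": 493568.0},
    }
    print(dev["sys_api"]["size"] / 1024)                # 482.0 KB
    print(dev["available"] or dev["rejected_reasons"])  # why cephadm skips it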
Nov 26 01:16:15 compute-0 systemd[1]: libpod-8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f.scope: Deactivated successfully.
Nov 26 01:16:15 compute-0 systemd[1]: libpod-8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f.scope: Consumed 2.376s CPU time.
Nov 26 01:16:15 compute-0 podman[212105]: 2025-11-26 01:16:15.996265965 +0000 UTC m=+0.047756374 container died 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa83f22f25490fd57b36648ee63276d472e216b43db6aa6f6c91f6de58500e3-merged.mount: Deactivated successfully.
Nov 26 01:16:16 compute-0 podman[212105]: 2025-11-26 01:16:16.107053649 +0000 UTC m=+0.158544008 container remove 8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:16 compute-0 systemd[1]: libpod-conmon-8a1efeb9c5224a33bd208be7c9690e86bae1a1d0c0085d5da7e8d6bfe189585f.scope: Deactivated successfully.
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43689k
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43689k
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
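The autotuner's arithmetic explains the warning: 43689k is the rounded display of 44738286 bytes, about 42.7 MiB per OSD, derived from this VM's small memory budget, while osd_memory_target has a hard floor of 939524096 bytes (896 MiB). The mon therefore rejects the set, which is why the per-OSD osd_memory_target keys were removed just above instead. The check, with the logged values:

    target = 44_738_286        # bytes the autotuner wanted to set
    floor = 939_524_096        # minimum from the error message
    print(target // 1024)      # 43689 -> the "43689k" in the INF line
    print(floor // 2**20)      # 896 (MiB)
    print(target < floor)      # True: the mon refuses the config set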
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fd9ea991-b9a5-4957-b9d5-225987176743 does not exist
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1b3c5e64-2bb4-4d6d-a589-a13ff326b437 does not exist
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c5844d13-d1a5-4a99-a977-4f17fe79e6d2 does not exist
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 26 01:16:17 compute-0 ceph-mon[192746]: Adjusting osd_memory_target on compute-0 to 43689k
Nov 26 01:16:17 compute-0 ceph-mon[192746]: Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.479772292 +0000 UTC m=+0.084171702 container create 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.447398348 +0000 UTC m=+0.051797808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:17 compute-0 systemd[1]: Started libpod-conmon-732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e.scope.
Nov 26 01:16:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.627448876 +0000 UTC m=+0.231848336 container init 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.644450691 +0000 UTC m=+0.248850111 container start 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.650889881 +0000 UTC m=+0.255289291 container attach 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:16:17 compute-0 distracted_villani[212272]: 167 167
Nov 26 01:16:17 compute-0 systemd[1]: libpod-732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e.scope: Deactivated successfully.
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.655976623 +0000 UTC m=+0.260376043 container died 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a11a5dcd41c21ae8abe25a2726631dbd808e2f60828f423635229b095fba5124-merged.mount: Deactivated successfully.
Nov 26 01:16:17 compute-0 podman[212256]: 2025-11-26 01:16:17.755215864 +0000 UTC m=+0.359615254 container remove 732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:17 compute-0 systemd[1]: libpod-conmon-732428fd9b9b659eed189e57e5ec2155d8e272f5e053a2d9d1d18e1669dabb1e.scope: Deactivated successfully.
Nov 26 01:16:18 compute-0 podman[212295]: 2025-11-26 01:16:18.031032656 +0000 UTC m=+0.095926840 container create a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:18 compute-0 podman[212295]: 2025-11-26 01:16:17.995945896 +0000 UTC m=+0.060840130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:18 compute-0 systemd[1]: Started libpod-conmon-a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc.scope.
Nov 26 01:16:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:18 compute-0 podman[212295]: 2025-11-26 01:16:18.240329701 +0000 UTC m=+0.305223885 container init a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:16:18 compute-0 podman[212295]: 2025-11-26 01:16:18.256289546 +0000 UTC m=+0.321183730 container start a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:18 compute-0 podman[212295]: 2025-11-26 01:16:18.262738287 +0000 UTC m=+0.327632481 container attach a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:19 compute-0 tender_hodgkin[212311]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:16:19 compute-0 tender_hodgkin[212311]: --> relative data size: 1.0
Nov 26 01:16:19 compute-0 tender_hodgkin[212311]: --> All data devices are unavailable
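tender_hodgkin looks like a ceph-volume lvm batch pass: it was handed three LVM data devices, found every one already carrying a BlueStore OSD (the raw listing at the top of this section covers two of them), and so reports "All data devices are unavailable", the normal idempotent outcome of re-applying an OSD spec. In spirit (LV names other than ceph_vg1-ceph_lv1 are guesses for illustration):

    # Hypothetical availability filter: LVs already holding an OSD are skipped.
    passed = ["ceph_vg0/ceph_lv0", "ceph_vg1/ceph_lv1", "ceph_vg2/ceph_lv2"]
    in_use = set(passed)           # all three appear in the OSD listing
    available = [lv for lv in passed if lv not in in_use]
    print(available or "All data devices are unavailable")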
Nov 26 01:16:19 compute-0 systemd[1]: libpod-a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc.scope: Deactivated successfully.
Nov 26 01:16:19 compute-0 systemd[1]: libpod-a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc.scope: Consumed 1.249s CPU time.
Nov 26 01:16:19 compute-0 podman[212295]: 2025-11-26 01:16:19.550911208 +0000 UTC m=+1.615805392 container died a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:19 compute-0 podman[212339]: 2025-11-26 01:16:19.573741786 +0000 UTC m=+0.111047801 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
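Worth noting about the health_status events (this one for node_exporter, later ones for openstack_network_exporter, ceilometer_agent_ipmi and kepler): the embedded config_data label is a Python dict literal, with single quotes and a bare True, not JSON, so json.loads will reject it while ast.literal_eval parses it cleanly. A minimal sketch, assuming the container name from this log:

    # Hedged sketch: pull the config_data label off a running container and
    # parse it. It is a Python literal (note 'recreate': True), not JSON.
    import ast, subprocess

    label = subprocess.run(
        ["podman", "inspect", "node_exporter", "--format",
         '{{ index .Config.Labels "config_data" }}'],
        check=True, capture_output=True, text=True,
    ).stdout
    cfg = ast.literal_eval(label)
    print(cfg["image"], cfg["ports"], cfg["healthcheck"]["test"])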
Nov 26 01:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-92598784547cd2b99ef68f39c52257bc0e4fc25e65757785adffb7f651bbc40c-merged.mount: Deactivated successfully.
Nov 26 01:16:19 compute-0 podman[212336]: 2025-11-26 01:16:19.618516296 +0000 UTC m=+0.158428454 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64)
Nov 26 01:16:19 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:16:19 compute-0 podman[212295]: 2025-11-26 01:16:19.655514019 +0000 UTC m=+1.720408183 container remove a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:16:19 compute-0 systemd[1]: libpod-conmon-a98db851b1f7b6560890671a40c82b51610a3764d23c93ed6bd42829ead939cc.scope: Deactivated successfully.
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.812234781 +0000 UTC m=+0.073039351 container create 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.79288082 +0000 UTC m=+0.053685370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:20 compute-0 systemd[1]: Started libpod-conmon-7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed.scope.
Nov 26 01:16:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.955573834 +0000 UTC m=+0.216378404 container init 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.972521917 +0000 UTC m=+0.233326497 container start 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:16:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.980584452 +0000 UTC m=+0.241389092 container attach 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:20 compute-0 compassionate_hertz[212542]: 167 167
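compassionate_hertz prints only "167 167" before exiting (tender_babbage repeats the same below): 167 is the uid and gid of the ceph user baked into the Red Hat/CentOS ceph images, and short-lived probes like this are how cephadm learns which ids to chown its bind-mounted directories to. The exact probe command is not in this log; a hand-run equivalent might look like the following, where the stat target is a guess:

    # Hedged sketch: reproduce the uid/gid probe by hand. The image digest is
    # copied from the surrounding log lines; /var/lib/ceph as the stat target
    # is an assumption, since the real probe command is not logged.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip())  # expected: 167 167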
Nov 26 01:16:20 compute-0 systemd[1]: libpod-7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed.scope: Deactivated successfully.
Nov 26 01:16:20 compute-0 podman[212527]: 2025-11-26 01:16:20.986064275 +0000 UTC m=+0.246868845 container died 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbf5ebec33862f9eafac2b97ac530d5ef100e6356a32f7e1bd1c404d19f713e8-merged.mount: Deactivated successfully.
Nov 26 01:16:21 compute-0 podman[212527]: 2025-11-26 01:16:21.062410577 +0000 UTC m=+0.323215127 container remove 7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:16:21 compute-0 systemd[1]: libpod-conmon-7de9779740cd44cbd5481183cab51c9e1f30985260389ba4ab45ad9f20671eed.scope: Deactivated successfully.
Nov 26 01:16:21 compute-0 podman[212565]: 2025-11-26 01:16:21.332761576 +0000 UTC m=+0.078023289 container create b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:16:21 compute-0 podman[212565]: 2025-11-26 01:16:21.301072992 +0000 UTC m=+0.046334755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:21 compute-0 systemd[1]: Started libpod-conmon-b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8.scope.
Nov 26 01:16:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cb7e96b911ff65e2a00dace423ba01c236d3716438e907d7dc549e9d22dc90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cb7e96b911ff65e2a00dace423ba01c236d3716438e907d7dc549e9d22dc90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cb7e96b911ff65e2a00dace423ba01c236d3716438e907d7dc549e9d22dc90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46cb7e96b911ff65e2a00dace423ba01c236d3716438e907d7dc549e9d22dc90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:21 compute-0 podman[212565]: 2025-11-26 01:16:21.490979325 +0000 UTC m=+0.236241048 container init b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:16:21 compute-0 podman[212565]: 2025-11-26 01:16:21.526780385 +0000 UTC m=+0.272042098 container start b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:16:21 compute-0 podman[212565]: 2025-11-26 01:16:21.53377492 +0000 UTC m=+0.279036633 container attach b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]: {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    "0": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "devices": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "/dev/loop3"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            ],
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_name": "ceph_lv0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_size": "21470642176",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "name": "ceph_lv0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "tags": {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.crush_device_class": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.encrypted": "0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_id": "0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.vdo": "0"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            },
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "vg_name": "ceph_vg0"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        }
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    ],
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    "1": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "devices": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "/dev/loop4"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            ],
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_name": "ceph_lv1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_size": "21470642176",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "name": "ceph_lv1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "tags": {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.crush_device_class": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.encrypted": "0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_id": "1",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.vdo": "0"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            },
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "vg_name": "ceph_vg1"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        }
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    ],
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    "2": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "devices": [
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "/dev/loop5"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            ],
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_name": "ceph_lv2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_size": "21470642176",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "name": "ceph_lv2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "tags": {
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.crush_device_class": "",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.encrypted": "0",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osd_id": "2",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:                "ceph.vdo": "0"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            },
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "type": "block",
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:            "vg_name": "ceph_vg2"
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:        }
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]:    ]
Nov 26 01:16:22 compute-0 cranky_varahamihira[212581]: }
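The cranky_varahamihira block above is `ceph-volume lvm list --format json` output: a map from OSD id to the logical volume(s) backing it, with the ceph.* LV tags expanded into the "tags" object. Stripped of the journald prefixes it is plain JSON, so a per-OSD summary falls out of a few lines of parsing; the file name below is hypothetical:

    # Hedged sketch: condense the `ceph-volume lvm list --format json` dump
    # captured above into an osd -> device summary. lv_size is in bytes.
    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the JSON above
        inventory = json.load(fh)
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f'osd.{osd_id}: {lv["lv_path"]} '
                  f'({size_gib:.1f} GiB on {",".join(lv["devices"])})')

Each OSD here reports 21470642176 bytes, roughly 20 GiB, which matches the three-OSD, 60 GiB cluster the pgmap lines describe.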
Nov 26 01:16:22 compute-0 systemd[1]: libpod-b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8.scope: Deactivated successfully.
Nov 26 01:16:22 compute-0 podman[212565]: 2025-11-26 01:16:22.325357735 +0000 UTC m=+1.070619458 container died b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-46cb7e96b911ff65e2a00dace423ba01c236d3716438e907d7dc549e9d22dc90-merged.mount: Deactivated successfully.
Nov 26 01:16:22 compute-0 podman[212565]: 2025-11-26 01:16:22.436153329 +0000 UTC m=+1.181415042 container remove b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:22 compute-0 systemd[1]: libpod-conmon-b799edf0dae35e620622f0f8851f8a33a8c7c63815fdfff20356d82ff41be1a8.scope: Deactivated successfully.
Nov 26 01:16:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.631692313 +0000 UTC m=+0.085020825 container create a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.600612795 +0000 UTC m=+0.053941357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:23 compute-0 systemd[1]: Started libpod-conmon-a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863.scope.
Nov 26 01:16:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.777707551 +0000 UTC m=+0.231036113 container init a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.793903913 +0000 UTC m=+0.247232415 container start a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.799658474 +0000 UTC m=+0.252987036 container attach a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:23 compute-0 tender_babbage[212753]: 167 167
Nov 26 01:16:23 compute-0 systemd[1]: libpod-a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863.scope: Deactivated successfully.
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.807978896 +0000 UTC m=+0.261307418 container died a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5fc3e1abbc336e74ab1e9b746e6b2fdaf7988081ac57d534dd19c4bad869ad4-merged.mount: Deactivated successfully.
Nov 26 01:16:23 compute-0 podman[212738]: 2025-11-26 01:16:23.883227727 +0000 UTC m=+0.336556239 container remove a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:23 compute-0 systemd[1]: libpod-conmon-a9fdc5d016d18d839febd172a299ce3c8db7a2d793ba437b25a2455cd9bd4863.scope: Deactivated successfully.
Nov 26 01:16:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:24 compute-0 python3[212797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
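This ansible task spells out the whole shape of the health probe: run the ceph CLI from the quay.io/ceph/ceph:v18 image with the host's /etc/ceph mounted in, ask for `status --format json`, and pipe the result through `jq .osdmap.num_up_osds`. The same check can be scripted without jq; the sketch below keeps the logged flags that matter for the status call and swaps the jq step for Python's json module (the assimilate/spec volume mounts from the task are omitted as irrelevant here):

    # Hedged sketch of the probe logged above, minus jq. All flags are copied
    # from the ansible _raw_params; only the unrelated mounts are dropped.
    import json, subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "status", "--format", "json",
    ]
    status = json.loads(subprocess.run(
        cmd, check=True, capture_output=True, text=True).stdout)
    print(status["osdmap"]["num_up_osds"])  # 3 in this log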
Nov 26 01:16:24 compute-0 podman[212803]: 2025-11-26 01:16:24.138911617 +0000 UTC m=+0.072953758 container create a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:24 compute-0 podman[212817]: 2025-11-26 01:16:24.212594475 +0000 UTC m=+0.074867762 container create efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:16:24 compute-0 podman[212803]: 2025-11-26 01:16:24.119590188 +0000 UTC m=+0.053632349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:24 compute-0 systemd[1]: Started libpod-conmon-a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2.scope.
Nov 26 01:16:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:24 compute-0 systemd[1]: Started libpod-conmon-efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51.scope.
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02f5f40c81af66382ecf85c5f9c8d6a051f943bf79f115d35a873611f88686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02f5f40c81af66382ecf85c5f9c8d6a051f943bf79f115d35a873611f88686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02f5f40c81af66382ecf85c5f9c8d6a051f943bf79f115d35a873611f88686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef02f5f40c81af66382ecf85c5f9c8d6a051f943bf79f115d35a873611f88686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 podman[212817]: 2025-11-26 01:16:24.188673357 +0000 UTC m=+0.050946674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:24 compute-0 podman[212803]: 2025-11-26 01:16:24.342289707 +0000 UTC m=+0.276331918 container init a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70885988b7a95c20408eded0cfadc1dbfd60726024aefd6a12beb951b10976e9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70885988b7a95c20408eded0cfadc1dbfd60726024aefd6a12beb951b10976e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70885988b7a95c20408eded0cfadc1dbfd60726024aefd6a12beb951b10976e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:24 compute-0 podman[212817]: 2025-11-26 01:16:24.375021651 +0000 UTC m=+0.237295018 container init efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:24 compute-0 podman[212803]: 2025-11-26 01:16:24.394778463 +0000 UTC m=+0.328820614 container start a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:24 compute-0 podman[212817]: 2025-11-26 01:16:24.398037324 +0000 UTC m=+0.260310611 container start efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:16:24 compute-0 podman[212803]: 2025-11-26 01:16:24.410144122 +0000 UTC m=+0.344186313 container attach a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:16:24 compute-0 podman[212841]: 2025-11-26 01:16:24.413328581 +0000 UTC m=+0.105640881 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 26 01:16:24 compute-0 podman[212817]: 2025-11-26 01:16:24.421238091 +0000 UTC m=+0.283511408 container attach efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 26 01:16:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 01:16:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470124376' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 01:16:25 compute-0 suspicious_cray[212842]: 
Nov 26 01:16:25 compute-0 suspicious_cray[212842]: {"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":151,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1764119771,"num_in_osds":3,"osd_in_since":1764119737,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":83333120,"bytes_avail":64328593408,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-26T01:15:42.962021+0000","services":{}},"progress_events":{}}
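That suspicious_cray line is the JSON the probe consumed, and it carries more than num_up_osds: the pgmap object reproduces exactly the "79 MiB used, 60 GiB / 60 GiB avail" figures the ceph-mgr DBG lines keep logging. A small reading of it, assuming the JSON has been saved to a file:

    # Hedged sketch: recompute the pgmap summary from the status JSON above.
    import json

    with open("ceph_status.json") as fh:  # hypothetical capture of that line
        s = json.load(fh)
    pg, osd = s["pgmap"], s["osdmap"]
    print(s["health"]["status"],
          f'{osd["num_up_osds"]}/{osd["num_osds"]} OSDs up,',
          f'{pg["bytes_used"] / 2**20:.0f} MiB used,',
          f'{pg["bytes_avail"] / 2**30:.0f} GiB /',
          f'{pg["bytes_total"] / 2**30:.0f} GiB avail')
    # -> HEALTH_OK 3/3 OSDs up, 79 MiB used, 60 GiB / 60 GiB avail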
Nov 26 01:16:25 compute-0 systemd[1]: libpod-efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51.scope: Deactivated successfully.
Nov 26 01:16:25 compute-0 podman[212889]: 2025-11-26 01:16:25.136142635 +0000 UTC m=+0.061695174 container died efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-70885988b7a95c20408eded0cfadc1dbfd60726024aefd6a12beb951b10976e9-merged.mount: Deactivated successfully.
Nov 26 01:16:25 compute-0 podman[212889]: 2025-11-26 01:16:25.250620002 +0000 UTC m=+0.176172481 container remove efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51 (image=quay.io/ceph/ceph:v18, name=suspicious_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:25 compute-0 systemd[1]: libpod-conmon-efa5916a41f13dc59c2924bb5b1e7de66a9847b34417254acca3b45c052fbb51.scope: Deactivated successfully.
Nov 26 01:16:25 compute-0 competent_keller[212833]: {
Nov 26 01:16:25 compute-0 competent_keller[212833]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_id": 0,
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "type": "bluestore"
Nov 26 01:16:25 compute-0 competent_keller[212833]:    },
Nov 26 01:16:25 compute-0 competent_keller[212833]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_id": 2,
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "type": "bluestore"
Nov 26 01:16:25 compute-0 competent_keller[212833]:    },
Nov 26 01:16:25 compute-0 competent_keller[212833]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_id": 1,
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:16:25 compute-0 competent_keller[212833]:        "type": "bluestore"
Nov 26 01:16:25 compute-0 competent_keller[212833]:    }
Nov 26 01:16:25 compute-0 competent_keller[212833]: }
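competent_keller's dump is keyed by OSD fsid rather than OSD id and maps each bluestore OSD to its /dev/mapper device; the shape matches `ceph-volume raw list` output (an inference, since the invocation itself is not logged). It should be internally consistent with the earlier lvm list dump, and that is cheap to check:

    # Hedged sketch: cross-check the fsid-keyed listing above against the
    # earlier osd_id-keyed lvm dump; both file names are hypothetical.
    import json

    raw = json.load(open("raw_list.json"))   # JSON printed by competent_keller
    lvm = json.load(open("lvm_list.json"))   # JSON printed by cranky_varahamihira
    for fsid, osd in raw.items():
        tags = lvm[str(osd["osd_id"])][0]["tags"]
        assert tags["ceph.osd_fsid"] == fsid == osd["osd_uuid"]
        print(f'osd.{osd["osd_id"]} consistent: {osd["device"]} ({osd["type"]})')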
Nov 26 01:16:25 compute-0 systemd[1]: libpod-a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2.scope: Deactivated successfully.
Nov 26 01:16:25 compute-0 systemd[1]: libpod-a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2.scope: Consumed 1.134s CPU time.
Nov 26 01:16:25 compute-0 podman[212803]: 2025-11-26 01:16:25.539739156 +0000 UTC m=+1.473781337 container died a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef02f5f40c81af66382ecf85c5f9c8d6a051f943bf79f115d35a873611f88686-merged.mount: Deactivated successfully.
Nov 26 01:16:25 compute-0 podman[212803]: 2025-11-26 01:16:25.652133224 +0000 UTC m=+1.586175375 container remove a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:16:25 compute-0 systemd[1]: libpod-conmon-a3c10a2ecbfd4760a5063544e9131d9f0fcefb5f7c33cb5fb3182362050d02c2.scope: Deactivated successfully.
Nov 26 01:16:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:25 compute-0 podman[212952]: 2025-11-26 01:16:25.735321397 +0000 UTC m=+0.094509300 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
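The kepler health_status=healthy event above is podman's native healthcheck firing; the test command (/openstack/healthcheck kepler) and its bind mount are visible in the config_data label. A sketch of exercising the same check manually (the .State.Health path may appear as .State.Healthcheck on older podman):

    # hedged sketch: run the container healthcheck by hand and read the status
    podman healthcheck run kepler && echo healthy
    podman inspect kepler --format '{{.State.Health.Status}}'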
Nov 26 01:16:25 compute-0 python3[212979]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
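Stripped of the ansible and podman wrapping, the task above is a one-shot ceph client call that creates the 'vms' pool. Note in the dispatched mon_command further down that 'replicated_rule' is recorded under "erasure_code_profile"; with pg_num omitted, the rule name is apparently consumed by the wrong positional slot, though the pool is still created as replicated. Equivalent direct call (a sketch, assuming a host ceph client):

    # hedged sketch: the same pool creation without the wrapper
    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on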
Nov 26 01:16:25 compute-0 podman[213011]: 2025-11-26 01:16:25.965909297 +0000 UTC m=+0.097719270 container create 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 26 01:16:26 compute-0 podman[213011]: 2025-11-26 01:16:25.930775235 +0000 UTC m=+0.062585258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 01:16:26 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 01:16:26 compute-0 systemd[1]: Started libpod-conmon-3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d.scope.
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:26 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 01:16:26 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
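The reconfigure of mon.compute-0 is driven by exactly the commands audited above: cephadm pulls the daemon keyring with 'auth get mon.' and regenerates the daemon's ceph.conf from 'config generate-minimal-conf'. To see what gets pushed (sketch):

    # hedged sketch: inspect the material cephadm redeploys to the daemon
    ceph config generate-minimal-conf
    ceph auth get mon.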
Nov 26 01:16:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dc3e6bc9aac18b4b1608dd327c2e04059ec919638231cea0cad6de674f9b26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44dc3e6bc9aac18b4b1608dd327c2e04059ec919638231cea0cad6de674f9b26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 26 01:16:26 compute-0 podman[213011]: 2025-11-26 01:16:26.136784978 +0000 UTC m=+0.268595011 container init 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:26 compute-0 podman[213011]: 2025-11-26 01:16:26.157584299 +0000 UTC m=+0.289394252 container start 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:26 compute-0 podman[213011]: 2025-11-26 01:16:26.163302049 +0000 UTC m=+0.295112102 container attach 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:16:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2453310766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.004362084 +0000 UTC m=+0.076555139 container create bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:27 compute-0 systemd[1]: Started libpod-conmon-bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277.scope.
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:26.977409282 +0000 UTC m=+0.049602327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2453310766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.138590613 +0000 UTC m=+0.210783718 container init bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:16:27 compute-0 amazing_kare[213052]: pool 'vms' created
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 26 01:16:27 compute-0 ceph-mon[192746]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 26 01:16:27 compute-0 ceph-mon[192746]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 26 01:16:27 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2453310766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.161243105 +0000 UTC m=+0.233436150 container start bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 26 01:16:27 compute-0 pensive_shtern[213207]: 167 167
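The '167 167' printed by the short-lived pensive_shtern container matches cephadm probing the ceph user's uid/gid inside the image (167:167 in these builds) before writing daemon files. A likely equivalent probe (a sketch; the exact path cephadm stats is an assumption):

    # hedged sketch: read the ceph uid/gid baked into the image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph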
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.171118351 +0000 UTC m=+0.243311476 container attach bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:27 compute-0 systemd[1]: libpod-bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277.scope: Deactivated successfully.
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.172403937 +0000 UTC m=+0.244597022 container died bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:27 compute-0 systemd[1]: libpod-3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d.scope: Deactivated successfully.
Nov 26 01:16:27 compute-0 podman[213011]: 2025-11-26 01:16:27.192063766 +0000 UTC m=+1.323873739 container died 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5217f7b4bacedff58c610d996fc4c1913cf71e68f9632c77e13aa377dcf2105d-merged.mount: Deactivated successfully.
Nov 26 01:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-44dc3e6bc9aac18b4b1608dd327c2e04059ec919638231cea0cad6de674f9b26-merged.mount: Deactivated successfully.
Nov 26 01:16:27 compute-0 podman[213011]: 2025-11-26 01:16:27.283785247 +0000 UTC m=+1.415595190 container remove 3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d (image=quay.io/ceph/ceph:v18, name=amazing_kare, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:16:27 compute-0 systemd[1]: libpod-conmon-3ac5fc83ed0596594f6ad55d9d6623cf9e0715093fed3d77e5f457618b16c09d.scope: Deactivated successfully.
Nov 26 01:16:27 compute-0 podman[213192]: 2025-11-26 01:16:27.30788309 +0000 UTC m=+0.380076145 container remove bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:16:27 compute-0 systemd[1]: libpod-conmon-bd84b2c8640f42913fdc2cd5d9f579242502ae26343ef23761e93a6c2a16a277.scope: Deactivated successfully.
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:27 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vbisdw (unknown last config time)...
Nov 26 01:16:27 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vbisdw (unknown last config time)...
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vbisdw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vbisdw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 01:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:27 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vbisdw on compute-0
Nov 26 01:16:27 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vbisdw on compute-0
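Reconfiguring the mgr re-asserts its keyring through the 'auth get-or-create' audited above; the caps list in that mon_command maps one-to-one onto the CLI form (sketch):

    # hedged sketch: the keyring refresh as a plain CLI call
    ceph auth get-or-create mgr.compute-0.vbisdw \
        mon 'profile mgr' osd 'allow *' mds 'allow *'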
Nov 26 01:16:27 compute-0 python3[213285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:27 compute-0 podman[213312]: 2025-11-26 01:16:27.801597317 +0000 UTC m=+0.083330608 container create 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:16:27 compute-0 podman[213312]: 2025-11-26 01:16:27.771083445 +0000 UTC m=+0.052816816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:27 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:27 compute-0 systemd[1]: Started libpod-conmon-7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b.scope.
Nov 26 01:16:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02373dad35702d04616926833198a876ce22b062fe2fe2e63b675900ba845135/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02373dad35702d04616926833198a876ce22b062fe2fe2e63b675900ba845135/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:27 compute-0 podman[213312]: 2025-11-26 01:16:27.996797598 +0000 UTC m=+0.278530889 container init 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:16:28 compute-0 podman[213312]: 2025-11-26 01:16:28.013671389 +0000 UTC m=+0.295404680 container start 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:16:28 compute-0 podman[213312]: 2025-11-26 01:16:28.019670167 +0000 UTC m=+0.301403478 container attach 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:16:28 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2453310766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:28 compute-0 ceph-mon[192746]: Reconfiguring mgr.compute-0.vbisdw (unknown last config time)...
Nov 26 01:16:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vbisdw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 26 01:16:28 compute-0 ceph-mon[192746]: Reconfiguring daemon mgr.compute-0.vbisdw on compute-0
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.369148586 +0000 UTC m=+0.071514128 container create a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 26 01:16:28 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.337559874 +0000 UTC m=+0.039925516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:28 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
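Each pool create materializes here as a single placement group (pool 2 -> pg 2.0 on osd.2, acting set [2]) that peers and reports Activating complete within about a second. To confirm a new pool's PGs settle into active+clean (sketch):

    # hedged sketch: watch the freshly created pool's PG state
    ceph pg ls-by-pool vms
    ceph pg stat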
Nov 26 01:16:28 compute-0 systemd[1]: Started libpod-conmon-a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7.scope.
Nov 26 01:16:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.530212974 +0000 UTC m=+0.232578606 container init a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.545067899 +0000 UTC m=+0.247433471 container start a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.550800639 +0000 UTC m=+0.253166241 container attach a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:16:28 compute-0 angry_napier[213434]: 167 167
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.567489855 +0000 UTC m=+0.269855437 container died a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:28 compute-0 systemd[1]: libpod-a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7.scope: Deactivated successfully.
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/473765816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb220eaca9eb71ebe9b1f8e7e9c64e73c8c81eaa498d65cc7b199b712823815e-merged.mount: Deactivated successfully.
Nov 26 01:16:28 compute-0 podman[213400]: 2025-11-26 01:16:28.640433072 +0000 UTC m=+0.342798614 container remove a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_napier, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:28 compute-0 systemd[1]: libpod-conmon-a6af99432cbebca8f1a1dc04c31192e436069eba28ec5bdbfc63fa1b9f490db7.scope: Deactivated successfully.
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 26 01:16:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/473765816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 26 01:16:29 compute-0 adoring_tesla[213356]: pool 'volumes' created
Nov 26 01:16:29 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 26 01:16:29 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/473765816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:29 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:29 compute-0 systemd[1]: libpod-7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b.scope: Deactivated successfully.
Nov 26 01:16:29 compute-0 podman[213312]: 2025-11-26 01:16:29.467783895 +0000 UTC m=+1.749517216 container died 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-02373dad35702d04616926833198a876ce22b062fe2fe2e63b675900ba845135-merged.mount: Deactivated successfully.
Nov 26 01:16:29 compute-0 podman[213312]: 2025-11-26 01:16:29.571491601 +0000 UTC m=+1.853224892 container remove 7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b (image=quay.io/ceph/ceph:v18, name=adoring_tesla, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:29 compute-0 systemd[1]: libpod-conmon-7fd6bc271294a0be1a5a84f08bf969887ed70e54a25b1c83efeaeee20473940b.scope: Deactivated successfully.
Nov 26 01:16:29 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
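POOL_APP_NOT_ENABLED is expected for a freshly created pool until it is tagged with an application; for the OpenStack vms/volumes/backups pools created in this run, that tag is normally rbd. The warning clears with (sketch):

    # hedged sketch: tag the pools so the health warning clears
    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd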
Nov 26 01:16:29 compute-0 podman[158021]: time="2025-11-26T01:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:16:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29176 "" "Go-http-client/1.1"
Nov 26 01:16:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5846 "" "Go-http-client/1.1"
Nov 26 01:16:30 compute-0 python3[213662]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:30 compute-0 podman[213663]: 2025-11-26 01:16:30.164006667 +0000 UTC m=+0.136576384 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:30 compute-0 podman[213680]: 2025-11-26 01:16:30.239771563 +0000 UTC m=+0.072091864 container create 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:30 compute-0 podman[213663]: 2025-11-26 01:16:30.295756086 +0000 UTC m=+0.268325743 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:30 compute-0 podman[213680]: 2025-11-26 01:16:30.212287616 +0000 UTC m=+0.044607977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:30 compute-0 systemd[1]: Started libpod-conmon-57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26.scope.
Nov 26 01:16:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a9d10d10279e9f58608e3055f6dfefe0694e0a025f74977d0f3b9fa1395d46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a9d10d10279e9f58608e3055f6dfefe0694e0a025f74977d0f3b9fa1395d46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:30 compute-0 podman[213680]: 2025-11-26 01:16:30.407143226 +0000 UTC m=+0.239463617 container init 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:16:30 compute-0 podman[213680]: 2025-11-26 01:16:30.423785541 +0000 UTC m=+0.256105852 container start 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:30 compute-0 podman[213680]: 2025-11-26 01:16:30.429261094 +0000 UTC m=+0.261581435 container attach 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 26 01:16:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 26 01:16:30 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 26 01:16:30 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/473765816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:30 compute-0 ceph-mon[192746]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:30 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3417081046' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 35d471f8-416e-419b-b446-1a2a85eea9c1 does not exist
Nov 26 01:16:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5b1eea09-f68d-41e5-88ba-6f7dd943a7a6 does not exist
Nov 26 01:16:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 379dc9be-7cbf-4671-9e99-ef6ce9604442 does not exist
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:31 compute-0 openstack_network_exporter[160178]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:16:31 compute-0 openstack_network_exporter[160178]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:16:31 compute-0 openstack_network_exporter[160178]: ERROR   01:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:16:31 compute-0 openstack_network_exporter[160178]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:16:31 compute-0 openstack_network_exporter[160178]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
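[annotation] The openstack_network_exporter errors above are expected on a node that runs neither ovn-northd nor a local ovsdb-server: the exporter locates each daemon through its control socket and finds none. A rough sketch of that discovery step, assuming the conventional /var/run/ovn run directory and the usual <daemon>.<pid>.ctl socket naming (the paths and naming are assumptions for illustration, not taken from the exporter's source):

```python
# Hypothetical check mirroring the exporter's complaint: look for a
# daemon's *.ctl control socket to learn its PID.
import glob
import os

def find_ctl_pid(rundir: str, daemon: str):
    # OVS/OVN daemons typically create sockets named <daemon>.<pid>.ctl
    # in their run directory; the PID is embedded in the filename.
    for path in glob.glob(os.path.join(rundir, f"{daemon}.*.ctl")):
        try:
            return int(path.rsplit(".", 2)[-2])
        except ValueError:
            continue
    return None  # corresponds to "no control socket files found"

# On this host both lookups come back empty, hence the ERROR lines above.
print(find_ctl_pid("/var/run/ovn", "ovn-northd"))
```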
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3417081046' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 26 01:16:31 compute-0 trusting_payne[213705]: pool 'backups' created
Nov 26 01:16:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3417081046' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:31 compute-0 systemd[1]: libpod-57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26.scope: Deactivated successfully.
Nov 26 01:16:31 compute-0 podman[213680]: 2025-11-26 01:16:31.527674707 +0000 UTC m=+1.359995038 container died 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a9d10d10279e9f58608e3055f6dfefe0694e0a025f74977d0f3b9fa1395d46-merged.mount: Deactivated successfully.
Nov 26 01:16:31 compute-0 podman[213680]: 2025-11-26 01:16:31.616069845 +0000 UTC m=+1.448390156 container remove 57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26 (image=quay.io/ceph/ceph:v18, name=trusting_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:16:31 compute-0 systemd[1]: libpod-conmon-57e8d1e7ca83ff832c1ca14b6dfeb4b0eff553710f42925488b1dfcd94e6dd26.scope: Deactivated successfully.
Nov 26 01:16:31 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:32 compute-0 python3[213953]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
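[annotation] The ansible-ansible.legacy.command entry above shells out to podman to run the containerized ceph CLI once per pool. A minimal Python sketch of the equivalent call, assuming podman is installed, quay.io/ceph/ceph:v18 is pullable, and the admin conf/keyring live under /etc/ceph; the fsid is copied from the log line, everything else is illustrative:

```python
# Illustrative re-creation of the task seen in the journal: create a
# replicated pool with the PG autoscaler on, via the containerized ceph CLI.
import subprocess

def create_pool(pool: str, fsid: str = "36901f64-240e-5c29-a2e2-29b56f2c329c") -> None:
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", fsid,
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "create", pool, "replicated_rule",
        "--autoscale-mode", "on",
    ]
    # On success ceph prints "pool '<name>' created", as captured in the
    # container output lines elsewhere in this journal.
    subprocess.run(cmd, check=True)

create_pool("images")
```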
Nov 26 01:16:32 compute-0 podman[213962]: 2025-11-26 01:16:32.186180156 +0000 UTC m=+0.100946050 container create 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:32 compute-0 podman[213962]: 2025-11-26 01:16:32.154529382 +0000 UTC m=+0.069295336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:32 compute-0 systemd[1]: Started libpod-conmon-65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66.scope.
Nov 26 01:16:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c722b47cd3bb743fb332aaefe42af16e5b9985ea727815fafc935b4eb8569/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/815c722b47cd3bb743fb332aaefe42af16e5b9985ea727815fafc935b4eb8569/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:32 compute-0 podman[213962]: 2025-11-26 01:16:32.366312786 +0000 UTC m=+0.281078730 container init 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:32 compute-0 podman[213962]: 2025-11-26 01:16:32.382953511 +0000 UTC m=+0.297719405 container start 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:32 compute-0 podman[213962]: 2025-11-26 01:16:32.389401471 +0000 UTC m=+0.304167425 container attach 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 26 01:16:32 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3417081046' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 26 01:16:32 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 26 01:16:32 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.688623437 +0000 UTC m=+0.091231319 container create 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:16:32 compute-0 systemd[1]: Started libpod-conmon-701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa.scope.
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.65506789 +0000 UTC m=+0.057675782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.798551856 +0000 UTC m=+0.201159738 container init 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.808493084 +0000 UTC m=+0.211100936 container start 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.813182415 +0000 UTC m=+0.215790267 container attach 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:32 compute-0 elegant_thompson[214044]: 167 167
Nov 26 01:16:32 compute-0 systemd[1]: libpod-701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa.scope: Deactivated successfully.
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.819969694 +0000 UTC m=+0.222577566 container died 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-63eaa892d3e660cfbf6d023ef26475923f9e38a780c27430f01c766b55a0113f-merged.mount: Deactivated successfully.
Nov 26 01:16:32 compute-0 podman[214009]: 2025-11-26 01:16:32.886036079 +0000 UTC m=+0.288643951 container remove 701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:32 compute-0 systemd[1]: libpod-conmon-701c56da0e55023053691572834fae41a43c28766cda920be29457d8ef8e20aa.scope: Deactivated successfully.
Nov 26 01:16:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3746353013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v70: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:33 compute-0 podman[214070]: 2025-11-26 01:16:33.103714508 +0000 UTC m=+0.077032722 container create f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:33 compute-0 podman[214070]: 2025-11-26 01:16:33.077348252 +0000 UTC m=+0.050666446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:33 compute-0 systemd[1]: Started libpod-conmon-f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a.scope.
Nov 26 01:16:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:33 compute-0 podman[214070]: 2025-11-26 01:16:33.295440412 +0000 UTC m=+0.268758666 container init f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:33 compute-0 podman[214070]: 2025-11-26 01:16:33.313155777 +0000 UTC m=+0.286474001 container start f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:33 compute-0 podman[214070]: 2025-11-26 01:16:33.319255777 +0000 UTC m=+0.292574001 container attach f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:16:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 26 01:16:33 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3746353013' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3746353013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 26 01:16:33 compute-0 hungry_hawking[213991]: pool 'images' created
Nov 26 01:16:33 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 26 01:16:33 compute-0 systemd[1]: libpod-65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66.scope: Deactivated successfully.
Nov 26 01:16:33 compute-0 podman[213962]: 2025-11-26 01:16:33.606232351 +0000 UTC m=+1.520998235 container died 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-815c722b47cd3bb743fb332aaefe42af16e5b9985ea727815fafc935b4eb8569-merged.mount: Deactivated successfully.
Nov 26 01:16:33 compute-0 podman[213962]: 2025-11-26 01:16:33.692216542 +0000 UTC m=+1.606982426 container remove 65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66 (image=quay.io/ceph/ceph:v18, name=hungry_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:33 compute-0 systemd[1]: libpod-conmon-65497456c7afb104e98bfc4fe439185a6c08658b4d6e2f603d43cae549ee1e66.scope: Deactivated successfully.
Nov 26 01:16:33 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:34 compute-0 python3[214130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:34 compute-0 podman[214136]: 2025-11-26 01:16:34.249602726 +0000 UTC m=+0.084444289 container create d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:16:34 compute-0 podman[214136]: 2025-11-26 01:16:34.215762201 +0000 UTC m=+0.050603824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:34 compute-0 systemd[1]: Started libpod-conmon-d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98.scope.
Nov 26 01:16:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9a4c1231a0577e94c0abdfe061ceb3bc9d990b2cf56fe50266eb487bbd2357/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9a4c1231a0577e94c0abdfe061ceb3bc9d990b2cf56fe50266eb487bbd2357/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:34 compute-0 podman[214136]: 2025-11-26 01:16:34.383644999 +0000 UTC m=+0.218486632 container init d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:16:34 compute-0 podman[214136]: 2025-11-26 01:16:34.400972843 +0000 UTC m=+0.235814386 container start d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:16:34 compute-0 podman[214136]: 2025-11-26 01:16:34.409617324 +0000 UTC m=+0.244458867 container attach d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 26 01:16:34 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3746353013' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 26 01:16:34 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 26 01:16:34 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:34 compute-0 xenodochial_mclaren[214085]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:16:34 compute-0 xenodochial_mclaren[214085]: --> relative data size: 1.0
Nov 26 01:16:34 compute-0 xenodochial_mclaren[214085]: --> All data devices are unavailable
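[annotation] The xenodochial_mclaren output above is ceph-volume probing the three LVM data devices and rejecting all of them, which is consistent with the LVs already being consumed by the existing osd.0-2; hence "All data devices are unavailable". A hedged sketch of checking availability the same way from the host, assuming `ceph-volume inventory --format json` is runnable and reports per-device `available`/`rejected_reasons`/`path` fields (field names as in upstream ceph-volume; treat them as assumptions here):

```python
# Illustrative only: list the devices ceph-volume would reject,
# using its JSON inventory output.
import json
import subprocess

out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for dev in json.loads(out):
    if not dev.get("available", False):
        # A device already carrying an OSD shows up here with its reasons.
        print(dev.get("path"), "rejected:", ", ".join(dev.get("rejected_reasons", [])))
```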
Nov 26 01:16:34 compute-0 systemd[1]: libpod-f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a.scope: Deactivated successfully.
Nov 26 01:16:34 compute-0 systemd[1]: libpod-f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a.scope: Consumed 1.286s CPU time.
Nov 26 01:16:34 compute-0 podman[214174]: 2025-11-26 01:16:34.791397326 +0000 UTC m=+0.054263967 container died f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca20a523cfafbb1163ae405050fc54357c56cfb4a9083becde0e6210b48301b3-merged.mount: Deactivated successfully.
Nov 26 01:16:34 compute-0 podman[214174]: 2025-11-26 01:16:34.893084625 +0000 UTC m=+0.155951256 container remove f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:16:34 compute-0 systemd[1]: libpod-conmon-f0e86560a4421e643bfc2f026ea839be8a293cb549c88e6e4fa59f3ff744b98a.scope: Deactivated successfully.
Nov 26 01:16:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v73: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4082631945' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:35 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 26 01:16:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4082631945' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 26 01:16:35 compute-0 tender_allen[214155]: pool 'cephfs.cephfs.meta' created
Nov 26 01:16:35 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4082631945' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:35 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 26 01:16:35 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:35 compute-0 systemd[1]: libpod-d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98.scope: Deactivated successfully.
Nov 26 01:16:35 compute-0 podman[214136]: 2025-11-26 01:16:35.638576313 +0000 UTC m=+1.473417896 container died d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf9a4c1231a0577e94c0abdfe061ceb3bc9d990b2cf56fe50266eb487bbd2357-merged.mount: Deactivated successfully.
Nov 26 01:16:35 compute-0 podman[214136]: 2025-11-26 01:16:35.731611211 +0000 UTC m=+1.566452784 container remove d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98 (image=quay.io/ceph/ceph:v18, name=tender_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:16:35 compute-0 systemd[1]: libpod-conmon-d270a623c9aa9317cd26feff27d7d82f2d2c5c83f84ced127702fa038fd86e98.scope: Deactivated successfully.
Nov 26 01:16:36 compute-0 python3[214382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.163180763 +0000 UTC m=+0.094244763 container create c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.129323847 +0000 UTC m=+0.060387897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:36 compute-0 systemd[1]: Started libpod-conmon-c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f.scope.
Nov 26 01:16:36 compute-0 podman[214402]: 2025-11-26 01:16:36.262597199 +0000 UTC m=+0.077524696 container create 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:16:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.305624351 +0000 UTC m=+0.236688371 container init c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.316420602 +0000 UTC m=+0.247484582 container start c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.32135789 +0000 UTC m=+0.252421910 container attach c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:16:36 compute-0 sharp_chatelet[214414]: 167 167
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.32530602 +0000 UTC m=+0.256369990 container died c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:16:36 compute-0 systemd[1]: Started libpod-conmon-60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc.scope.
Nov 26 01:16:36 compute-0 systemd[1]: libpod-c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f.scope: Deactivated successfully.
Nov 26 01:16:36 compute-0 podman[214402]: 2025-11-26 01:16:36.240383479 +0000 UTC m=+0.055310986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e49fdcaad8594ee974b937f71f2025c253d442c8f60a4938fe801528292a267/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e49fdcaad8594ee974b937f71f2025c253d442c8f60a4938fe801528292a267/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4db2e724691f3404e8f4faf0dbbb8f191495971330627687eff56ceb73777f-merged.mount: Deactivated successfully.
Nov 26 01:16:36 compute-0 podman[214402]: 2025-11-26 01:16:36.384702509 +0000 UTC m=+0.199630046 container init 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:36 compute-0 podman[214402]: 2025-11-26 01:16:36.395264304 +0000 UTC m=+0.210191801 container start 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:36 compute-0 podman[214402]: 2025-11-26 01:16:36.403413961 +0000 UTC m=+0.218341458 container attach 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:16:36 compute-0 podman[214389]: 2025-11-26 01:16:36.409663866 +0000 UTC m=+0.340727846 container remove c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:16:36 compute-0 systemd[1]: libpod-conmon-c4fb40dce33ae02c37316a2c883eabdd17ee4141603115127e757625a0eee28f.scope: Deactivated successfully.
Nov 26 01:16:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 26 01:16:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 26 01:16:36 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 26 01:16:36 compute-0 ceph-mon[192746]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:36 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4082631945' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:36 compute-0 podman[214447]: 2025-11-26 01:16:36.645921733 +0000 UTC m=+0.090640242 container create d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:16:36 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:36 compute-0 podman[214447]: 2025-11-26 01:16:36.612980914 +0000 UTC m=+0.057699493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:36 compute-0 systemd[1]: Started libpod-conmon-d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0.scope.
Nov 26 01:16:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764f7d8be3daf23f9c9d409fb32da0e8894db0965365048bc53bdfcac895929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764f7d8be3daf23f9c9d409fb32da0e8894db0965365048bc53bdfcac895929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764f7d8be3daf23f9c9d409fb32da0e8894db0965365048bc53bdfcac895929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b764f7d8be3daf23f9c9d409fb32da0e8894db0965365048bc53bdfcac895929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:36 compute-0 podman[214447]: 2025-11-26 01:16:36.834796468 +0000 UTC m=+0.279515007 container init d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:36 compute-0 podman[214447]: 2025-11-26 01:16:36.859625081 +0000 UTC m=+0.304343610 container start d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:16:36 compute-0 podman[214447]: 2025-11-26 01:16:36.866668868 +0000 UTC m=+0.311387437 container attach d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 26 01:16:36 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2348584392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v76: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:37 compute-0 naughty_mayer[214479]: {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    "0": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "devices": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "/dev/loop3"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            ],
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_name": "ceph_lv0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_size": "21470642176",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "name": "ceph_lv0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "tags": {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.crush_device_class": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.encrypted": "0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_id": "0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.vdo": "0"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            },
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "vg_name": "ceph_vg0"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        }
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    ],
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    "1": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "devices": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "/dev/loop4"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            ],
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_name": "ceph_lv1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_size": "21470642176",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "name": "ceph_lv1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "tags": {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.crush_device_class": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.encrypted": "0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_id": "1",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.vdo": "0"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            },
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "vg_name": "ceph_vg1"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        }
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    ],
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    "2": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "devices": [
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "/dev/loop5"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            ],
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_name": "ceph_lv2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_size": "21470642176",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "name": "ceph_lv2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "tags": {
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.cluster_name": "ceph",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.crush_device_class": "",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.encrypted": "0",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osd_id": "2",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:                "ceph.vdo": "0"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            },
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "type": "block",
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:            "vg_name": "ceph_vg2"
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:        }
Nov 26 01:16:37 compute-0 naughty_mayer[214479]:    ]
Nov 26 01:16:37 compute-0 naughty_mayer[214479]: }
Nov 26 01:16:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 26 01:16:37 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2348584392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 26 01:16:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2348584392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 26 01:16:37 compute-0 tender_neumann[214426]: pool 'cephfs.cephfs.data' created
Nov 26 01:16:37 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 26 01:16:37 compute-0 systemd[1]: libpod-d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0.scope: Deactivated successfully.
Nov 26 01:16:37 compute-0 podman[214447]: 2025-11-26 01:16:37.675405831 +0000 UTC m=+1.120124390 container died d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:37 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:37 compute-0 systemd[1]: libpod-60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc.scope: Deactivated successfully.
Nov 26 01:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b764f7d8be3daf23f9c9d409fb32da0e8894db0965365048bc53bdfcac895929-merged.mount: Deactivated successfully.
Nov 26 01:16:37 compute-0 podman[214402]: 2025-11-26 01:16:37.728244686 +0000 UTC m=+1.543172213 container died 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:16:37 compute-0 podman[214447]: 2025-11-26 01:16:37.788716455 +0000 UTC m=+1.233434954 container remove d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:16:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e49fdcaad8594ee974b937f71f2025c253d442c8f60a4938fe801528292a267-merged.mount: Deactivated successfully.
Nov 26 01:16:37 compute-0 systemd[1]: libpod-conmon-d498cb535204295d3f564aed5251f723b9a8d9b0d0dd66258bc9931be4e8cbd0.scope: Deactivated successfully.
Nov 26 01:16:37 compute-0 podman[214402]: 2025-11-26 01:16:37.842373033 +0000 UTC m=+1.657300530 container remove 60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc (image=quay.io/ceph/ceph:v18, name=tender_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:16:37 compute-0 systemd[1]: libpod-conmon-60471d7fd96792e95e1ef8decaa0c1f059cd5d8ed649da3fab8377ec2fcbc3cc.scope: Deactivated successfully.
Nov 26 01:16:38 compute-0 python3[214614]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:38 compute-0 podman[214641]: 2025-11-26 01:16:38.519814431 +0000 UTC m=+0.099273893 container create eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:38 compute-0 podman[214641]: 2025-11-26 01:16:38.485261146 +0000 UTC m=+0.064720658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:38 compute-0 systemd[1]: Started libpod-conmon-eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7.scope.
Nov 26 01:16:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d9ae5af655920a171339bdcdd9435bb66647048d84fbfa8dd453b9b4a1f99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d9ae5af655920a171339bdcdd9435bb66647048d84fbfa8dd453b9b4a1f99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 26 01:16:38 compute-0 podman[214641]: 2025-11-26 01:16:38.662762803 +0000 UTC m=+0.242222295 container init eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 26 01:16:38 compute-0 podman[214641]: 2025-11-26 01:16:38.673789951 +0000 UTC m=+0.253249403 container start eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:16:38 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 26 01:16:38 compute-0 podman[214641]: 2025-11-26 01:16:38.68236134 +0000 UTC m=+0.261820802 container attach eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:16:38 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2348584392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 26 01:16:38 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:38 compute-0 podman[214699]: 2025-11-26 01:16:38.966541856 +0000 UTC m=+0.066936420 container create 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:16:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:39 compute-0 systemd[1]: Started libpod-conmon-93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6.scope.
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:38.93875845 +0000 UTC m=+0.039153074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:39.104214371 +0000 UTC m=+0.204608975 container init 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:39.117981515 +0000 UTC m=+0.218376069 container start 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:39.122778289 +0000 UTC m=+0.223172873 container attach 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:39 compute-0 youthful_hamilton[214734]: 167 167
Nov 26 01:16:39 compute-0 systemd[1]: libpod-93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6.scope: Deactivated successfully.
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:39.12819808 +0000 UTC m=+0.228592644 container died 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f61d9b9bb5d19deea2b7293a8f81e4225f1064e5f5db7c61b40dd725b85d7fe5-merged.mount: Deactivated successfully.
Nov 26 01:16:39 compute-0 podman[214699]: 2025-11-26 01:16:39.192108265 +0000 UTC m=+0.292502809 container remove 93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:39 compute-0 systemd[1]: libpod-conmon-93d0db718665fb3ba7a682be32dee123c7dfcd8d1ed2543f55c63fe18cd9ede6.scope: Deactivated successfully.
Nov 26 01:16:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 26 01:16:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1261046793' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 01:16:39 compute-0 podman[214759]: 2025-11-26 01:16:39.45228343 +0000 UTC m=+0.083475762 container create b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:16:39 compute-0 podman[214759]: 2025-11-26 01:16:39.420277327 +0000 UTC m=+0.051469709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:39 compute-0 systemd[1]: Started libpod-conmon-b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0.scope.
Nov 26 01:16:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843a2c9c5b8af05326ea57970021912bc7a5f4f8f7ef7e2059e1685e121e1625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843a2c9c5b8af05326ea57970021912bc7a5f4f8f7ef7e2059e1685e121e1625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843a2c9c5b8af05326ea57970021912bc7a5f4f8f7ef7e2059e1685e121e1625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843a2c9c5b8af05326ea57970021912bc7a5f4f8f7ef7e2059e1685e121e1625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:39 compute-0 podman[214759]: 2025-11-26 01:16:39.622662508 +0000 UTC m=+0.253854900 container init b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:16:39 compute-0 podman[214759]: 2025-11-26 01:16:39.646415062 +0000 UTC m=+0.277607394 container start b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:39 compute-0 podman[214759]: 2025-11-26 01:16:39.653057177 +0000 UTC m=+0.284249559 container attach b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 01:16:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 26 01:16:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1261046793' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 01:16:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 26 01:16:39 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1261046793' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 26 01:16:39 compute-0 epic_noether[214665]: enabled application 'rbd' on pool 'vms'
Nov 26 01:16:39 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 26 01:16:39 compute-0 systemd[1]: libpod-eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7.scope: Deactivated successfully.
Nov 26 01:16:39 compute-0 podman[214641]: 2025-11-26 01:16:39.745404316 +0000 UTC m=+1.324863778 container died eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-463d9ae5af655920a171339bdcdd9435bb66647048d84fbfa8dd453b9b4a1f99-merged.mount: Deactivated successfully.
Nov 26 01:16:39 compute-0 podman[214641]: 2025-11-26 01:16:39.837019564 +0000 UTC m=+1.416478996 container remove eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7 (image=quay.io/ceph/ceph:v18, name=epic_noether, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:39 compute-0 systemd[1]: libpod-conmon-eaa0ef4d2a31408b93da8e70675a321b99c038c314f20c64f47984126d4e85e7.scope: Deactivated successfully.
Nov 26 01:16:40 compute-0 python3[214821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:40 compute-0 podman[214822]: 2025-11-26 01:16:40.398943566 +0000 UTC m=+0.088577325 container create a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:16:40 compute-0 podman[214822]: 2025-11-26 01:16:40.366463319 +0000 UTC m=+0.056097098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:40 compute-0 systemd[1]: Started libpod-conmon-a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a.scope.
Nov 26 01:16:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe804cdbcc1f981093ef19dec6f28cf94175109dcacff7c3d280ee97c9712e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe804cdbcc1f981093ef19dec6f28cf94175109dcacff7c3d280ee97c9712e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:40 compute-0 podman[214822]: 2025-11-26 01:16:40.545105568 +0000 UTC m=+0.234739377 container init a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:16:40 compute-0 podman[214822]: 2025-11-26 01:16:40.561472695 +0000 UTC m=+0.251106454 container start a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:16:40 compute-0 podman[214822]: 2025-11-26 01:16:40.572282956 +0000 UTC m=+0.261916765 container attach a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:40 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1261046793' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 26 01:16:40 compute-0 zen_khayyam[214776]: {
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_id": 0,
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "type": "bluestore"
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    },
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_id": 2,
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "type": "bluestore"
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    },
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_id": 1,
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:        "type": "bluestore"
Nov 26 01:16:40 compute-0 zen_khayyam[214776]:    }
Nov 26 01:16:40 compute-0 zen_khayyam[214776]: }
Nov 26 01:16:40 compute-0 systemd[1]: libpod-b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0.scope: Deactivated successfully.
Nov 26 01:16:40 compute-0 systemd[1]: libpod-b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0.scope: Consumed 1.149s CPU time.
Nov 26 01:16:40 compute-0 podman[214759]: 2025-11-26 01:16:40.804521442 +0000 UTC m=+1.435713774 container died b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-843a2c9c5b8af05326ea57970021912bc7a5f4f8f7ef7e2059e1685e121e1625-merged.mount: Deactivated successfully.
Nov 26 01:16:40 compute-0 podman[214759]: 2025-11-26 01:16:40.9022155 +0000 UTC m=+1.533407812 container remove b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:40 compute-0 systemd[1]: libpod-conmon-b489cdcbf29f2bb71bcf243ae6a837c7d2ec29e5e6d06b1318c32228f070eaf0.scope: Deactivated successfully.
Nov 26 01:16:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:16:40
Nov 26 01:16:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:16:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Some PGs (0.142857) are unknown; try again later
Nov 26 01:16:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:16:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 26 01:16:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/591664368' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 01:16:41 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:42 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/591664368' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 26 01:16:42 compute-0 ceph-mon[192746]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 26 01:16:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/591664368' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 01:16:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 26 01:16:42 compute-0 lucid_nightingale[214844]: enabled application 'rbd' on pool 'volumes'
Nov 26 01:16:42 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 26 01:16:42 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev 6c6c07cc-cdc8-4afb-bc63-03e4178c60b3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 01:16:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:42 compute-0 systemd[1]: libpod-a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a.scope: Deactivated successfully.
Nov 26 01:16:42 compute-0 podman[214822]: 2025-11-26 01:16:42.06251766 +0000 UTC m=+1.752151449 container died a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe804cdbcc1f981093ef19dec6f28cf94175109dcacff7c3d280ee97c9712e4-merged.mount: Deactivated successfully.
Nov 26 01:16:42 compute-0 podman[214822]: 2025-11-26 01:16:42.152811851 +0000 UTC m=+1.842445620 container remove a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a (image=quay.io/ceph/ceph:v18, name=lucid_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:16:42 compute-0 systemd[1]: libpod-conmon-a62bf1e38274504ddfae7c9adc5ae8f0050f466ea1c79a7f7b21742840b38d0a.scope: Deactivated successfully.
Nov 26 01:16:42 compute-0 python3[214988]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:42 compute-0 podman[214989]: 2025-11-26 01:16:42.672112713 +0000 UTC m=+0.081520538 container create 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:16:42 compute-0 systemd[1]: Started libpod-conmon-4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834.scope.
Nov 26 01:16:42 compute-0 podman[214989]: 2025-11-26 01:16:42.650615623 +0000 UTC m=+0.060023438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1ec5f3a27055722369b2e2a7110d669637d6b9015d5361402356a64fd6e1d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc1ec5f3a27055722369b2e2a7110d669637d6b9015d5361402356a64fd6e1d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:42 compute-0 podman[214989]: 2025-11-26 01:16:42.82958769 +0000 UTC m=+0.238995565 container init 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:16:42 compute-0 podman[214989]: 2025-11-26 01:16:42.847212363 +0000 UTC m=+0.256620188 container start 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:16:42 compute-0 podman[214989]: 2025-11-26 01:16:42.85427814 +0000 UTC m=+0.263685995 container attach 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 01:16:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 26 01:16:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 26 01:16:43 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 26 01:16:43 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev e6401351-47ed-4e4f-a9fd-cd8645aa510f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 01:16:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:43 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/591664368' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 26 01:16:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 26 01:16:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075458733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 01:16:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075458733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 01:16:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 26 01:16:44 compute-0 cranky_bardeen[215003]: enabled application 'rbd' on pool 'backups'
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 26 01:16:44 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev a879b428-3355-478b-a16a-7970d16a99de (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 32 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=8.381536484s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active pruub 46.612926483s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=32 pruub=8.381536484s) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown pruub 46.612926483s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:44 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4075458733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 26 01:16:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.14( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.15( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1a( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1b( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.18( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.19( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.16( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1e( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1f( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1c( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1d( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.17( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.12( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.13( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.2( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.3( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.4( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.5( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.8( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.9( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.6( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.7( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.a( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.b( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.c( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.d( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.e( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.f( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.11( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.1( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 33 pg[2.10( empty local-lis/les=18/19 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:44 compute-0 systemd[1]: libpod-4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834.scope: Deactivated successfully.
Nov 26 01:16:44 compute-0 podman[214989]: 2025-11-26 01:16:44.078957319 +0000 UTC m=+1.488365144 container died 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc1ec5f3a27055722369b2e2a7110d669637d6b9015d5361402356a64fd6e1d-merged.mount: Deactivated successfully.
Nov 26 01:16:44 compute-0 podman[214989]: 2025-11-26 01:16:44.169031835 +0000 UTC m=+1.578439640 container remove 4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834 (image=quay.io/ceph/ceph:v18, name=cranky_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:16:44 compute-0 systemd[1]: libpod-conmon-4dbf0ddf2efddbb629d7b0e1e533f1a90babe37096fe773da8384cea7e2e2834.scope: Deactivated successfully.
Nov 26 01:16:44 compute-0 python3[215063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:44 compute-0 podman[215064]: 2025-11-26 01:16:44.686425472 +0000 UTC m=+0.096479235 container create 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 01:16:44 compute-0 podman[215064]: 2025-11-26 01:16:44.641452276 +0000 UTC m=+0.051506089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:44 compute-0 systemd[1]: Started libpod-conmon-3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6.scope.
Nov 26 01:16:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31cb62caf0421affeccae4eb0dea99ef02ff566cb467ef69f50d883832a9a639/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31cb62caf0421affeccae4eb0dea99ef02ff566cb467ef69f50d883832a9a639/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:44 compute-0 podman[215064]: 2025-11-26 01:16:44.854949488 +0000 UTC m=+0.265003241 container init 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:44 compute-0 podman[215064]: 2025-11-26 01:16:44.869509835 +0000 UTC m=+0.279563608 container start 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:16:44 compute-0 podman[215064]: 2025-11-26 01:16:44.876189231 +0000 UTC m=+0.286242994 container attach 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v86: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 26 01:16:45 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev 29dae7cf-879e-4814-8699-5c919ae80f26 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 01:16:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:45 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4075458733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 26 01:16:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=32/34 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=18/18 les/c/f=19/19/0 sis=32) [2] r=0 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 26 01:16:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316860961' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 01:16:45 compute-0 podman[215102]: 2025-11-26 01:16:45.581736104 +0000 UTC m=+0.117287046 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 01:16:45 compute-0 podman[215104]: 2025-11-26 01:16:45.593266656 +0000 UTC m=+0.127257915 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:16:45 compute-0 podman[215105]: 2025-11-26 01:16:45.64890626 +0000 UTC m=+0.180027579 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 26 01:16:45 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=10.774902344s) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active pruub 63.797882080s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:45 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=10.774902344s) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown pruub 63.797882080s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:45 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=8.694345474s) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active pruub 55.427513123s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:45 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34 pruub=8.694345474s) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown pruub 55.427513123s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2316860961' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 26 01:16:46 compute-0 frosty_feistel[215077]: enabled application 'rbd' on pool 'images'
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 26 01:16:46 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev afd99780-6cb2-4055-bd8a-2d8670d20242 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 26 01:16:46 compute-0 ceph-mgr[193049]: [progress WARNING root] Starting Global Recovery Event, 93 pgs not in active + clean state
Nov 26 01:16:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2316860961' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:46 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2316860961' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1e( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.7( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.b( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.4( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.f( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.11( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.12( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.10( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.16( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.17( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=22/23 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=20/21 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1e( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.7( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.b( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.4( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.f( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.11( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.10( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.12( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.16( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.17( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=34/35 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=22/22 les/c/f=23/23/0 sis=34) [0] r=0 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=34/35 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=20/20 les/c/f=21/21/0 sis=34) [1] r=0 lpr=34 pi=[20,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:46 compute-0 systemd[1]: libpod-3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6.scope: Deactivated successfully.
Nov 26 01:16:46 compute-0 podman[215064]: 2025-11-26 01:16:46.129767417 +0000 UTC m=+1.539821240 container died 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-31cb62caf0421affeccae4eb0dea99ef02ff566cb467ef69f50d883832a9a639-merged.mount: Deactivated successfully.
Nov 26 01:16:46 compute-0 podman[215064]: 2025-11-26 01:16:46.204123474 +0000 UTC m=+1.614177227 container remove 3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:46 compute-0 systemd[1]: libpod-conmon-3e7b112eead4630f7413c1cc16ff737abc488d7b413a29eb6691e95ce8804ce6.scope: Deactivated successfully.
Nov 26 01:16:46 compute-0 python3[215204]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:46 compute-0 podman[215205]: 2025-11-26 01:16:46.738958919 +0000 UTC m=+0.094289164 container create 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:46 compute-0 podman[215205]: 2025-11-26 01:16:46.705447043 +0000 UTC m=+0.060777338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:46 compute-0 systemd[1]: Started libpod-conmon-544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f.scope.
Nov 26 01:16:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cd1b4dcafb1cb8bf7720f8236a1847b303cb50cd75977880fd268fec38f3ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38cd1b4dcafb1cb8bf7720f8236a1847b303cb50cd75977880fd268fec38f3ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:46 compute-0 podman[215205]: 2025-11-26 01:16:46.911560089 +0000 UTC m=+0.266890364 container init 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:16:46 compute-0 podman[215205]: 2025-11-26 01:16:46.928488532 +0000 UTC m=+0.283818747 container start 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:16:46 compute-0 podman[215205]: 2025-11-26 01:16:46.934171641 +0000 UTC m=+0.289501876 container attach 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:16:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev dfc0c98a-325b-445a-b31c-95ba91cc246d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev 6c6c07cc-cdc8-4afb-bc63-03e4178c60b3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 6c6c07cc-cdc8-4afb-bc63-03e4178c60b3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev e6401351-47ed-4e4f-a9fd-cd8645aa510f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event e6401351-47ed-4e4f-a9fd-cd8645aa510f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev a879b428-3355-478b-a16a-7970d16a99de (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event a879b428-3355-478b-a16a-7970d16a99de (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev 29dae7cf-879e-4814-8699-5c919ae80f26 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 29dae7cf-879e-4814-8699-5c919ae80f26 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev afd99780-6cb2-4055-bd8a-2d8670d20242 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event afd99780-6cb2-4055-bd8a-2d8670d20242 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev dfc0c98a-325b-445a-b31c-95ba91cc246d (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 26 01:16:47 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event dfc0c98a-325b-445a-b31c-95ba91cc246d (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 26 01:16:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1820768699' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 01:16:48 compute-0 ceph-mon[192746]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:16:48 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1820768699' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 26 01:16:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 26 01:16:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1820768699' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 01:16:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 26 01:16:48 compute-0 infallible_hopper[215220]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 26 01:16:48 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 26 01:16:48 compute-0 systemd[1]: libpod-544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f.scope: Deactivated successfully.
Nov 26 01:16:48 compute-0 podman[215205]: 2025-11-26 01:16:48.180483393 +0000 UTC m=+1.535813628 container died 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-38cd1b4dcafb1cb8bf7720f8236a1847b303cb50cd75977880fd268fec38f3ed-merged.mount: Deactivated successfully.
Nov 26 01:16:48 compute-0 podman[215205]: 2025-11-26 01:16:48.267427661 +0000 UTC m=+1.622757896 container remove 544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f (image=quay.io/ceph/ceph:v18, name=infallible_hopper, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:16:48 compute-0 systemd[1]: libpod-conmon-544cb88208f8c919466d436fed9f61c48af3e788c445ac9066973ffb5c93b47f.scope: Deactivated successfully.
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=36 pruub=12.353302956s) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899711609s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=36 pruub=12.353302956s) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown pruub 67.899711609s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.3( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.9( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.a( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.7( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.12( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.13( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.10( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.11( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.14( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.15( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.18( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.19( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.16( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.17( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1a( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1b( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1e( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1f( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1c( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1d( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 37 pg[6.5( empty local-lis/les=26/27 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:48 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 26 01:16:48 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 26 01:16:48 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 26 01:16:48 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 26 01:16:48 compute-0 python3[215280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:48 compute-0 podman[215281]: 2025-11-26 01:16:48.797801412 +0000 UTC m=+0.075264403 container create 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:48 compute-0 systemd[194522]: Starting Mark boot as successful...
Nov 26 01:16:48 compute-0 systemd[194522]: Finished Mark boot as successful.
Nov 26 01:16:48 compute-0 systemd[1]: Started libpod-conmon-656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e.scope.
Nov 26 01:16:48 compute-0 podman[215281]: 2025-11-26 01:16:48.769552163 +0000 UTC m=+0.047015144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9010fb8717962f2b76ea246c8b04b0a7672e1ce3b1b4b9fd85d723500a6fe072/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9010fb8717962f2b76ea246c8b04b0a7672e1ce3b1b4b9fd85d723500a6fe072/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:48 compute-0 podman[215281]: 2025-11-26 01:16:48.961235386 +0000 UTC m=+0.238698427 container init 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:16:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:48 compute-0 podman[215281]: 2025-11-26 01:16:48.9717464 +0000 UTC m=+0.249209381 container start 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:16:48 compute-0 podman[215281]: 2025-11-26 01:16:48.977695216 +0000 UTC m=+0.255158207 container attach 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v92: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 26 01:16:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 26 01:16:49 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 26 01:16:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=13.533523560s) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active pruub 63.657268524s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:49 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1820768699' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 26 01:16:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=38 pruub=13.533523560s) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown pruub 63.657268524s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=36/38 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=26/26 les/c/f=27/27/0 sis=36) [0] r=0 lpr=36 pi=[26,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 26 01:16:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1400134407' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 01:16:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 01:16:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=36 pruub=8.832902908s) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active pruub 52.778926849s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=36 pruub=8.832902908s) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown pruub 52.778926849s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=24/25 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 26 01:16:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:16:50 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1400134407' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 26 01:16:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1400134407' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 01:16:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 26 01:16:50 compute-0 modest_keldysh[215298]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
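The one-shot quay.io/ceph/ceph:v18 container (modest_keldysh) above is dispatching the same `osd pool application enable` mon command that an operator can run by hand. A minimal sketch, assuming the `ceph` CLI and an admin keyring are available on the host; the pool and application names are the ones shown in the audit lines above:

```python
# Sketch: tag the CephFS pools with the 'cephfs' application, mirroring the
# mon_command audit entries above. Assumes the `ceph` CLI and an admin
# keyring are present on the host; pool names are taken from the log.
import subprocess

POOLS = ["cephfs.cephfs.meta", "cephfs.cephfs.data"]

for pool in POOLS:
    # Equivalent to the dispatched {"prefix": "osd pool application enable", ...}
    subprocess.run(
        ["ceph", "osd", "pool", "application", "enable", pool, "cephfs"],
        check=True,
    )
    print(f"enabled application 'cephfs' on pool '{pool}'")
```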
Nov 26 01:16:50 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1e( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1d( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.12( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.10( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.17( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.14( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.b( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.7( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.d( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.19( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=28/29 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1d( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1e( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.12( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.10( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=36/39 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 systemd[1]: libpod-656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e.scope: Deactivated successfully.
Nov 26 01:16:50 compute-0 podman[215281]: 2025-11-26 01:16:50.238372671 +0000 UTC m=+1.515835662 container died 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.17( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.16( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.14( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=24/24 les/c/f=25/25/0 sis=36) [2] r=0 lpr=36 pi=[24,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.b( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=38/39 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.7( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.d( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.19( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=28/28 les/c/f=29/29/0 sis=38) [1] r=0 lpr=38 pi=[28,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9010fb8717962f2b76ea246c8b04b0a7672e1ce3b1b4b9fd85d723500a6fe072-merged.mount: Deactivated successfully.
Nov 26 01:16:50 compute-0 podman[215281]: 2025-11-26 01:16:50.329787547 +0000 UTC m=+1.607250498 container remove 656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e (image=quay.io/ceph/ceph:v18, name=modest_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:16:50 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 26 01:16:50 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 26 01:16:50 compute-0 systemd[1]: libpod-conmon-656d63fb3771a5ce6a467c7a0d3afc0cd8d4c76e69a5207ce54c51c3a9fa281e.scope: Deactivated successfully.
Nov 26 01:16:50 compute-0 podman[215328]: 2025-11-26 01:16:50.396684342 +0000 UTC m=+0.111376230 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:16:50 compute-0 podman[215325]: 2025-11-26 01:16:50.420939456 +0000 UTC m=+0.134773899 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, architecture=x86_64)
Nov 26 01:16:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
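The mgr's pgmap DBG lines have a stable shape (`pgmap vN: X pgs: <count> <state>, ...; <data> data, <used> used, <avail> / <total> avail`), so PG state counts can be scraped straight from the journal. A minimal sketch, assuming the journal lines are available as plain text; the regex is written against the exact format shown above:

```python
# Sketch: extract per-state PG counts from ceph-mgr pgmap DBG lines like
#   "pgmap v95: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, ..."
# Assumes journal lines are supplied as text (e.g. piped from journalctl).
import re

PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

def pg_states(line: str) -> dict[str, int] | None:
    m = PGMAP_RE.search(line)
    if not m:
        return None
    states = {}
    for part in m.group(3).split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    return states

line = ("pgmap v95: 193 pgs: 31 unknown, 162 active+clean; "
        "449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail")
print(pg_states(line))  # {'unknown': 31, 'active+clean': 162}
```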
Nov 26 01:16:51 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 9 completed events
Nov 26 01:16:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:16:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:51 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 01:16:51 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 01:16:51 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1400134407' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 26 01:16:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:51 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 01:16:51 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 01:16:51 compute-0 python3[215450]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:16:52 compute-0 python3[215521]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119811.106644-37109-21090217587730/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:16:52 compute-0 ceph-mon[192746]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 01:16:52 compute-0 ceph-mon[192746]: Cluster is now healthy
Nov 26 01:16:52 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 26 01:16:52 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 26 01:16:52 compute-0 python3[215623]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:16:52 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 26 01:16:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:52 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:16:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 26 01:16:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.864023209s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369216919s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863967896s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369216919s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926956177s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.432296753s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926912308s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.432350159s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926892281s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.432350159s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863505363s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369140625s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926709175s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.432418823s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863520622s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369277954s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926892281s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.432296753s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863501549s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369277954s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863412857s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369140625s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.926638603s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.432418823s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.935113907s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.440986633s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.935097694s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.440986633s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863089561s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369102478s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863152504s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369178772s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863072395s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369102478s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.863109589s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369178772s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862797737s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369056702s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862771034s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369056702s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862747192s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369140625s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862723351s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369140625s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934623718s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441184998s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862504005s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369102478s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934591293s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441184998s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862484932s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369102478s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934486389s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441200256s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862232208s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.368988037s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.862204552s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.368988037s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934441566s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441200256s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934885025s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441825867s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934860229s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441825867s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934246063s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441314697s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934147835s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441314697s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.861610413s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.368820190s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.861818314s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.369064331s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.861578941s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.368820190s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.861772537s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.369064331s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.934440613s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441223145s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933984756s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441390991s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933775902s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441223145s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933922768s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441390991s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859885216s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367591858s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933685303s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441413879s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859856606s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367591858s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933649063s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441413879s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859555244s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367454529s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.860880852s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.368774414s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859420776s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367355347s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859395027s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367355347s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.860784531s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.368774414s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859302521s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367454529s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.859017372s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367393494s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933042526s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441452026s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858952522s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367393494s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932990074s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441452026s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858649254s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367301941s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858611107s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367301941s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858305931s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367187500s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932595253s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441467285s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858395576s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367332458s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932551384s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441467285s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858274460s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367187500s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858365059s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367332458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858038902s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 69.367134094s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.858016968s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.367134094s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932421684s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441619873s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932381630s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441619873s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932308197s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441619873s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932281494s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441619873s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932279587s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441650391s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932240486s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441650391s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.933093071s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.442565918s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.932994843s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.442565918s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.931890488s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441734314s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.931860924s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441734314s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.929685593s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 72.441009521s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=36/38 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40 pruub=11.929640770s) [2] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.441009521s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.953499794s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.185325623s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.953457832s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.185325623s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851712227s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083770752s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851684570s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083770752s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851585388s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083801270s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851561546s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083801270s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.952933311s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.185302734s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.952909470s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.185302734s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851318359s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083843231s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850882530s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083713531s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.851287842s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083843231s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=0/0 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=0/0 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.820122719s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277812958s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850854874s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083713531s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.962141037s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195114136s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.962112427s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195114136s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850973129s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.084014893s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.820077896s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277812958s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.953910828s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.411815643s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.953987122s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.411975861s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.953845978s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.411815643s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.953945160s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.411975861s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850953102s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.084014893s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.820812225s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.279018402s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850348473s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083518982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.820781708s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.279018402s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.819658279s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277919769s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.819481850s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277919769s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.819080353s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277767181s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.819038391s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277767181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961727142s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.194915771s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850317001s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083518982s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961707115s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.194915771s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850040436s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083480835s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850015640s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083480835s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850164413s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083694458s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.818716049s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277732849s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.818634987s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277713776s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.818625450s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277732849s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.818584442s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277713776s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.850091934s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083694458s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.952977180s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412002563s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961524963s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195205688s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.952445984s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412002563s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961498260s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195205688s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.849321365s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083179474s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961327553s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195220947s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961436272s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195350647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961414337s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195350647s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961283684s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195220947s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.849257469s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083179474s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961130142s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195251465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961111069s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195251465s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961857796s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.196075439s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.838085175s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072341919s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.838067055s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072341919s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961815834s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.196075439s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.961013794s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195396423s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960996628s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195396423s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.848796844s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.083339691s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.848721504s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.083339691s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960750580s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195480347s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960734367s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195480347s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949638367s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.411998749s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949604988s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.411998749s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949514389s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412052155s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949490547s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412052155s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.814393044s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277076721s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.814372063s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277076721s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.950159073s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412986755s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949216843s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412075043s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949197769s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412075043s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.950105667s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412986755s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.814074516s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277069092s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.814058304s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277069092s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.813825607s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277015686s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949203491s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412384033s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837401390s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072292328s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837384224s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072292328s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960495949s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195526123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960478783s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195526123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837866783s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072353363s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837176323s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072307587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837162018s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072307587s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.837229729s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072353363s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960399628s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195640564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960381508s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195640564s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960147858s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195526123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.960125923s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195526123s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836861610s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072254181s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836819649s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072254181s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836698532s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072216034s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836677551s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072216034s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836481094s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072170258s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836460114s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072170258s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959927559s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195732117s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.949119568s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412384033s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.948371887s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412242889s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.812950134s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277023315s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.812907219s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277023315s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.812727928s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277015686s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.948055267s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412242889s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.811761856s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276969910s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.811729431s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276969910s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946950912s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412395477s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946922302s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412395477s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.811276436s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276962280s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.811255455s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276962280s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946689606s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412498474s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946672440s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412498474s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.811063766s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276977539s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946507454s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412456512s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946486473s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412456512s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810997963s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276977539s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810872078s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276954651s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810853958s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276954651s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946452141s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412639618s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946434021s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412639618s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946230888s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412578583s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946211815s) [0] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412578583s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946026802s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412540436s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959911346s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195732117s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959727287s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195648193s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959711075s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195648193s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836130142s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072154999s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.836112976s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072154999s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959636688s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195808411s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959613800s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195808411s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.835778236s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.072078705s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.835762024s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.072078705s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959336281s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195770264s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.959319115s) [2] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195770264s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.835146904s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.071960449s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.835128784s) [2] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.071960449s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.834998131s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 63.071933746s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=34/35 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=8.834982872s) [0] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.071933746s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810159683s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276935577s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810117722s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276935577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808786392s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.275817871s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808758736s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.275817871s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.945467949s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412693024s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.954729080s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195846558s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.945442200s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412693024s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.954669952s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195846558s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.954040527s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active pruub 67.195274353s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=38/39 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40 pruub=12.954013824s) [0] r=-1 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.195274353s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808750153s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276294708s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808707237s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276294708s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810067177s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277793884s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.810032845s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277793884s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.809799194s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277729034s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.809774399s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277729034s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.944615364s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412696838s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.944591522s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412696838s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.946007729s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412540436s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.807550430s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.275802612s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.807526588s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.275802612s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808520317s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.277027130s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.808493614s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.277027130s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.944095612s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412887573s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.944061279s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412887573s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.943874359s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.412784576s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.943824768s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.412784576s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.806753159s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.275794983s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.806725502s) [0] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.275794983s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.943935394s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 60.413097382s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=36/39 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=12.943905830s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 60.413097382s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.800171852s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 63.276920319s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=32/34 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40 pruub=15.800130844s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.276920319s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=0/0 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=0/0 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=0/0 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:16:53 compute-0 python3[215698]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119812.5014172-37123-174967983333597/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=aebcad514522b0d1e40af28e7e36d100e59c5371 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:16:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 26 01:16:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 26 01:16:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:54 compute-0 python3[215748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:16:54 compute-0 podman[215749]: 2025-11-26 01:16:54.215252059 +0000 UTC m=+0.075454877 container create e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:16:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:16:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 26 01:16:54 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 26 01:16:54 compute-0 podman[215749]: 2025-11-26 01:16:54.182948079 +0000 UTC m=+0.043150967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 systemd[1]: Started libpod-conmon-e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18.scope.
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [0] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=34/20 lis/c=34/34 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=38/28 lis/c=38/38 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[38,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=32/18 lis/c=32/32 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=34/22 lis/c=34/34 les/c/f=35/35/0 sis=40) [1] r=0 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=36/24 lis/c=36/36 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=36/26 lis/c=36/36 les/c/f=38/38/0 sis=40) [1] r=0 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:16:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8f02372465359baedf6e8a3679de78125bbe783239d8a7b0c233400d5d550/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8f02372465359baedf6e8a3679de78125bbe783239d8a7b0c233400d5d550/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8f02372465359baedf6e8a3679de78125bbe783239d8a7b0c233400d5d550/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:54 compute-0 podman[215749]: 2025-11-26 01:16:54.369437695 +0000 UTC m=+0.229640513 container init e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:54 compute-0 podman[215749]: 2025-11-26 01:16:54.380334642 +0000 UTC m=+0.240537440 container start e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:54 compute-0 podman[215749]: 2025-11-26 01:16:54.384971992 +0000 UTC m=+0.245174820 container attach e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:54 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 26 01:16:54 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 26 01:16:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 26 01:16:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/835104380' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:16:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/835104380' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 01:16:54 compute-0 hopeful_torvalds[215765]: 
Nov 26 01:16:54 compute-0 hopeful_torvalds[215765]: [global]
Nov 26 01:16:54 compute-0 hopeful_torvalds[215765]: 	fsid = 36901f64-240e-5c29-a2e2-29b56f2c329c
Nov 26 01:16:54 compute-0 hopeful_torvalds[215765]: 	mon_host = 192.168.122.100
Nov 26 01:16:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:55 compute-0 systemd[1]: libpod-e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18.scope: Deactivated successfully.
Nov 26 01:16:55 compute-0 podman[215749]: 2025-11-26 01:16:55.019133555 +0000 UTC m=+0.879336343 container died e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a8f02372465359baedf6e8a3679de78125bbe783239d8a7b0c233400d5d550-merged.mount: Deactivated successfully.
Nov 26 01:16:55 compute-0 podman[215749]: 2025-11-26 01:16:55.104559593 +0000 UTC m=+0.964762391 container remove e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18 (image=quay.io/ceph/ceph:v18, name=hopeful_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:16:55 compute-0 systemd[1]: libpod-conmon-e5b726a0f6650cfb536c732230a189bc25532ad4bb66ca9c0e9b4ed1b6c5cf18.scope: Deactivated successfully.
Nov 26 01:16:55 compute-0 podman[215794]: 2025-11-26 01:16:55.199402935 +0000 UTC m=+0.128698458 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 01:16:55 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/835104380' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 26 01:16:55 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/835104380' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 26 01:16:55 compute-0 python3[215920]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
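Flattened into the journal as a single `_raw_params` string, the command Ansible invokes here is easier to read with its line continuations restored (same flags, reflowed only; the trailing `#012` is journald's escape for a final newline):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1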
Nov 26 01:16:55 compute-0 podman[215946]: 2025-11-26 01:16:55.57853906 +0000 UTC m=+0.071951069 container create 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:55 compute-0 systemd[1]: Started libpod-conmon-5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb.scope.
Nov 26 01:16:55 compute-0 podman[215946]: 2025-11-26 01:16:55.548065551 +0000 UTC m=+0.041477580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1643f570d186f86a20ccfc5b89214bb540d4ac079c55ffde0fee7dff75cc781/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1643f570d186f86a20ccfc5b89214bb540d4ac079c55ffde0fee7dff75cc781/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1643f570d186f86a20ccfc5b89214bb540d4ac079c55ffde0fee7dff75cc781/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:55 compute-0 podman[215946]: 2025-11-26 01:16:55.703679847 +0000 UTC m=+0.197091876 container init 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:55 compute-0 podman[215946]: 2025-11-26 01:16:55.717889437 +0000 UTC m=+0.211301446 container start 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:55 compute-0 podman[215946]: 2025-11-26 01:16:55.730105951 +0000 UTC m=+0.223517950 container attach 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:56 compute-0 podman[216005]: 2025-11-26 01:16:56.058005613 +0000 UTC m=+0.103863799 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Nov 26 01:16:56 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 1b72dec3-a965-407c-a683-47c932d5b429 (Global Recovery Event) in 10 seconds
Nov 26 01:16:56 compute-0 podman[216073]: 2025-11-26 01:16:56.284580478 +0000 UTC m=+0.118221193 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:16:56 compute-0 podman[216073]: 2025-11-26 01:16:56.399662751 +0000 UTC m=+0.233303396 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 26 01:16:56 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3366405822' entity='client.admin' 
Nov 26 01:16:56 compute-0 interesting_bardeen[215971]: set ssl_option
Nov 26 01:16:56 compute-0 systemd[1]: libpod-5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb.scope: Deactivated successfully.
Nov 26 01:16:56 compute-0 podman[215946]: 2025-11-26 01:16:56.463978604 +0000 UTC m=+0.957390653 container died 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1643f570d186f86a20ccfc5b89214bb540d4ac079c55ffde0fee7dff75cc781-merged.mount: Deactivated successfully.
Nov 26 01:16:56 compute-0 podman[215946]: 2025-11-26 01:16:56.554543916 +0000 UTC m=+1.047955925 container remove 5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb (image=quay.io/ceph/ceph:v18, name=interesting_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:16:56 compute-0 systemd[1]: libpod-conmon-5e9191cf602ef84f439863b46f96f2c916812f9b87259a729725bc44a25a9fbb.scope: Deactivated successfully.
Nov 26 01:16:56 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 26 01:16:56 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 26 01:16:56 compute-0 python3[216191]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
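This `orch apply --in-file /home/ceph_spec.yaml` call feeds cephadm the RGW service spec mounted from /tmp/ceph_rgw.yml; its effect appears below as "Saving service rgw.rgw spec with placement compute-0". The spec file itself is never reproduced in the journal; based only on that saved service name and placement, it plausibly looks like the following (hypothetical reconstruction, field values assumed):

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
    EOF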
Nov 26 01:16:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.071326701 +0000 UTC m=+0.085913322 container create b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.043240329 +0000 UTC m=+0.057826960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:16:57 compute-0 systemd[1]: Started libpod-conmon-b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe.scope.
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3c24ea2b-893c-448e-a29d-d498c94af37e does not exist
Nov 26 01:16:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2f60a637-3f49-40dd-97c2-8c0893985e73 does not exist
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3424afc5-403a-47a8-8173-3fb7e6db3406 does not exist
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989675fb3ce8157e96eed1d8af20304491643340674c7b9d79ad2b4494d2d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989675fb3ce8157e96eed1d8af20304491643340674c7b9d79ad2b4494d2d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06989675fb3ce8157e96eed1d8af20304491643340674c7b9d79ad2b4494d2d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.252026134 +0000 UTC m=+0.266612795 container init b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.270120414 +0000 UTC m=+0.284707005 container start b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.276101562 +0000 UTC m=+0.290688233 container attach b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/3366405822' entity='client.admin' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 26 01:16:57 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 01:16:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 01:16:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:57 compute-0 pedantic_ganguly[216242]: Scheduled rgw.rgw update...
Nov 26 01:16:57 compute-0 systemd[1]: libpod-b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe.scope: Deactivated successfully.
Nov 26 01:16:57 compute-0 podman[216212]: 2025-11-26 01:16:57.957755933 +0000 UTC m=+0.972342594 container died b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-06989675fb3ce8157e96eed1d8af20304491643340674c7b9d79ad2b4494d2d4-merged.mount: Deactivated successfully.
Nov 26 01:16:58 compute-0 podman[216212]: 2025-11-26 01:16:58.045670771 +0000 UTC m=+1.060257352 container remove b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe (image=quay.io/ceph/ceph:v18, name=pedantic_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:16:58 compute-0 systemd[1]: libpod-conmon-b88231c22a9145afe402afb9920f6063ed0478b33f6b0258108d7080326216fe.scope: Deactivated successfully.
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.273277416 +0000 UTC m=+0.087594480 container create 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.237560869 +0000 UTC m=+0.051877933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:58 compute-0 systemd[1]: Started libpod-conmon-12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272.scope.
Nov 26 01:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.407657473 +0000 UTC m=+0.221974537 container init 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.423028756 +0000 UTC m=+0.237345780 container start 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.427913294 +0000 UTC m=+0.242230408 container attach 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:16:58 compute-0 flamboyant_babbage[216433]: 167 167
Nov 26 01:16:58 compute-0 systemd[1]: libpod-12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272.scope: Deactivated successfully.
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.433990565 +0000 UTC m=+0.248307589 container died 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 01:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c80f17ec3ea3ca09ee51708aea0647e68983cad42e22c4d29a79124a6d322d-merged.mount: Deactivated successfully.
Nov 26 01:16:58 compute-0 podman[216416]: 2025-11-26 01:16:58.487188614 +0000 UTC m=+0.301505638 container remove 12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:16:58 compute-0 systemd[1]: libpod-conmon-12c82b1c02e3e695265d1557c950f6a9449b28d50f0ccf487e383cc63a842272.scope: Deactivated successfully.
Nov 26 01:16:58 compute-0 podman[216455]: 2025-11-26 01:16:58.758625564 +0000 UTC m=+0.099005871 container create 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:16:58 compute-0 podman[216455]: 2025-11-26 01:16:58.723160735 +0000 UTC m=+0.063541102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:16:58 compute-0 systemd[1]: Started libpod-conmon-413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd.scope.
Nov 26 01:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:16:58 compute-0 podman[216455]: 2025-11-26 01:16:58.900916613 +0000 UTC m=+0.241296970 container init 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:16:58 compute-0 ceph-mon[192746]: Saving service rgw.rgw spec with placement compute-0
Nov 26 01:16:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:16:58 compute-0 podman[216455]: 2025-11-26 01:16:58.933170542 +0000 UTC m=+0.273550839 container start 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:16:58 compute-0 podman[216455]: 2025-11-26 01:16:58.940510639 +0000 UTC m=+0.280890986 container attach 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:16:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:16:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:16:59 compute-0 python3[216551]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:16:59 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 26 01:16:59 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 26 01:16:59 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 26 01:16:59 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 26 01:16:59 compute-0 python3[216622]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119818.8034384-37164-223196006684071/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:16:59 compute-0 podman[158021]: time="2025-11-26T01:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:16:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30884 "" "Go-http-client/1.1"
Nov 26 01:16:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6270 "" "Go-http-client/1.1"
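The `podman[158021]` entries are the Podman system service answering libpod REST calls from a Go HTTP client that is polling the container list and stats, most likely a local metrics or inventory agent. The same endpoint can be exercised by hand over the API socket; the socket path below is the usual rootful default, assumed rather than shown in this log:

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'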
Nov 26 01:17:00 compute-0 upbeat_goldwasser[216495]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:17:00 compute-0 upbeat_goldwasser[216495]: --> relative data size: 1.0
Nov 26 01:17:00 compute-0 upbeat_goldwasser[216495]: --> All data devices are unavailable
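upbeat_goldwasser is a cephadm-launched ceph-volume run (note the bootstrap-osd keyring and /var/log/ceph bind mounts above); "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is the shape of a `ceph-volume lvm batch` report that finds every candidate LV already consumed by an existing OSD, so there is nothing new to create. One way to confirm this from the cluster side (a standard cephadm command, not taken from this log) is:

    ceph orch device ls --wide   # per-host devices with availability and reject reasons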
Nov 26 01:17:00 compute-0 systemd[1]: libpod-413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd.scope: Deactivated successfully.
Nov 26 01:17:00 compute-0 systemd[1]: libpod-413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd.scope: Consumed 1.217s CPU time.
Nov 26 01:17:00 compute-0 podman[216455]: 2025-11-26 01:17:00.210599504 +0000 UTC m=+1.550979781 container died 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-879695d54f01ce19a059ad436dc34f1b8accce5999be1694af9f6ce75b7787df-merged.mount: Deactivated successfully.
Nov 26 01:17:00 compute-0 podman[216455]: 2025-11-26 01:17:00.314382789 +0000 UTC m=+1.654763066 container remove 413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:00 compute-0 systemd[1]: libpod-conmon-413eb8f6b409266a20784200f0361492e9a054af69c3dba2f98cea5d15e130dd.scope: Deactivated successfully.
Nov 26 01:17:00 compute-0 python3[216696]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 26 01:17:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 26 01:17:00 compute-0 podman[216709]: 2025-11-26 01:17:00.48224917 +0000 UTC m=+0.100041040 container create dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:00 compute-0 podman[216709]: 2025-11-26 01:17:00.439540927 +0000 UTC m=+0.057332797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:00 compute-0 systemd[1]: Started libpod-conmon-dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9.scope.
Nov 26 01:17:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd594b2dc861fc010ea797f384843b7dec8be2372557a500197f11de53d5d245/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd594b2dc861fc010ea797f384843b7dec8be2372557a500197f11de53d5d245/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd594b2dc861fc010ea797f384843b7dec8be2372557a500197f11de53d5d245/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:00 compute-0 podman[216709]: 2025-11-26 01:17:00.675526427 +0000 UTC m=+0.293318347 container init dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:00 compute-0 podman[216709]: 2025-11-26 01:17:00.699141983 +0000 UTC m=+0.316933813 container start dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:17:00 compute-0 podman[216709]: 2025-11-26 01:17:00.704299338 +0000 UTC m=+0.322091208 container attach dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:17:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 10 completed events
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 01:17:01 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0[192742]: 2025-11-26T01:17:01.313+0000 7f84f1757640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e2 new map
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e2 print_map
e2
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-26T01:17:01.314659+0000
modified	2025-11-26T01:17:01.314709+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in
up	{}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 01:17:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:01 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
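The run above is the mgr volumes module servicing an `fs volume create` call: it creates the metadata and data pools, issues `fs new`, and the cluster flags MDS_ALL_DOWN until an MDS daemon actually joins. A minimal sketch of the equivalent hand-run CLI, with pool and filesystem names taken from the log (these are standard Ceph commands, not copied from this run):

    # pools the volumes module created on our behalf
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    # stitch them into a filesystem; MDS_ALL_DOWN clears once an MDS is up
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
    # or the single command the playbook effectively ran:
    ceph fs volume create cephfs compute-0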
Nov 26 01:17:01 compute-0 systemd[1]: libpod-dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9.scope: Deactivated successfully.
Nov 26 01:17:01 compute-0 podman[216709]: 2025-11-26 01:17:01.388205823 +0000 UTC m=+1.005997693 container died dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 26 01:17:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 26 01:17:01 compute-0 openstack_network_exporter[160178]: ERROR   01:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:17:01 compute-0 openstack_network_exporter[160178]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:17:01 compute-0 openstack_network_exporter[160178]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:17:01 compute-0 openstack_network_exporter[160178]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:17:01 compute-0 openstack_network_exporter[160178]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
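These exporter errors are about missing daemons, not the exporter itself: ovs-appctl-style calls need the target daemon's control socket, and neither ovsdb-server nor ovn-northd appears to be running on this node. A quick check, assuming the default runtime directories:

    # an empty listing means there is no socket for the exporter to call
    ls /var/run/openvswitch/*.ctl 2>/dev/null
    ls /var/run/ovn/*.ctl 2>/dev/null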
Nov 26 01:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd594b2dc861fc010ea797f384843b7dec8be2372557a500197f11de53d5d245-merged.mount: Deactivated successfully.
Nov 26 01:17:01 compute-0 podman[216709]: 2025-11-26 01:17:01.496234348 +0000 UTC m=+1.114026188 container remove dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9 (image=quay.io/ceph/ceph:v18, name=exciting_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.506158367 +0000 UTC m=+0.141240211 container create a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:17:01 compute-0 systemd[1]: libpod-conmon-dc49ac96a8b54804150b0f372c566e5cbea027cb16179b2a8342baa8d8fbddb9.scope: Deactivated successfully.
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.45410489 +0000 UTC m=+0.089186774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:01 compute-0 systemd[1]: Started libpod-conmon-a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5.scope.
Nov 26 01:17:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.646585835 +0000 UTC m=+0.281667739 container init a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.660934939 +0000 UTC m=+0.296016803 container start a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.666968209 +0000 UTC m=+0.302050123 container attach a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:01 compute-0 reverent_brown[216912]: 167 167
Nov 26 01:17:01 compute-0 systemd[1]: libpod-a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5.scope: Deactivated successfully.
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.671450516 +0000 UTC m=+0.306532390 container died a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fe1574d8f7244846b12643603b0fe98d24e70eee6d84f15399ed79e98066d1b-merged.mount: Deactivated successfully.
Nov 26 01:17:01 compute-0 podman[216884]: 2025-11-26 01:17:01.758740626 +0000 UTC m=+0.393822490 container remove a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_brown, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:17:01 compute-0 systemd[1]: libpod-conmon-a66a66135fd28c41f52d820cf977f5cf60e995363ea322d6dba460c91c09c0e5.scope: Deactivated successfully.
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 26 01:17:01 compute-0 ceph-mon[192746]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 26 01:17:01 compute-0 ceph-mon[192746]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 26 01:17:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:02 compute-0 python3[216956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
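The spec file passed via --in-file is not echoed into the journal; given the mgr's "Saving service mds.cephfs spec with placement compute-0" lines, a plausible reconstruction of /tmp/ceph_mds.yml is (hypothetical, inferred from the log):

    # sketch of the MDS service spec consumed by 'ceph orch apply'
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF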
Nov 26 01:17:02 compute-0 podman[216962]: 2025-11-26 01:17:02.054720227 +0000 UTC m=+0.095272226 container create 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:02 compute-0 podman[216962]: 2025-11-26 01:17:02.024533027 +0000 UTC m=+0.065085026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:02 compute-0 podman[216972]: 2025-11-26 01:17:02.128032024 +0000 UTC m=+0.099466805 container create bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:02 compute-0 systemd[1]: Started libpod-conmon-9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23.scope.
Nov 26 01:17:02 compute-0 podman[216972]: 2025-11-26 01:17:02.08353992 +0000 UTC m=+0.054974741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d94d174aae487f6365c5dc102c3f76470562d52694c3abdd54d481123ae11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d94d174aae487f6365c5dc102c3f76470562d52694c3abdd54d481123ae11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d94d174aae487f6365c5dc102c3f76470562d52694c3abdd54d481123ae11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 systemd[1]: Started libpod-conmon-bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83.scope.
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d94d174aae487f6365c5dc102c3f76470562d52694c3abdd54d481123ae11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
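The recurring "supports timestamps until 2038" kernel lines just mean the container-storage overlay sits on an XFS filesystem without the bigtime feature; the kernel prints one line per bind-mounted path at remount. Harmless for now, and checkable with (a sketch; the upgrade step assumes an xfsprogs recent enough to support feature upgrades and an unmounted filesystem):

    # bigtime=0 in the output confirms the 2038 limit
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
    # enabling it later on an unmounted filesystem would look like:
    # xfs_admin -O bigtime=1 /dev/<device>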
Nov 26 01:17:02 compute-0 podman[216962]: 2025-11-26 01:17:02.24216309 +0000 UTC m=+0.282715119 container init 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:17:02 compute-0 podman[216962]: 2025-11-26 01:17:02.276655482 +0000 UTC m=+0.317207481 container start 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:17:02 compute-0 podman[216962]: 2025-11-26 01:17:02.283470264 +0000 UTC m=+0.324022253 container attach 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d03f8a19e25b98eb1babbfb28e2cc6630c722078b34c303fd48ba444f9f96a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d03f8a19e25b98eb1babbfb28e2cc6630c722078b34c303fd48ba444f9f96a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d03f8a19e25b98eb1babbfb28e2cc6630c722078b34c303fd48ba444f9f96a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:02 compute-0 podman[216972]: 2025-11-26 01:17:02.36526762 +0000 UTC m=+0.336702441 container init bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:17:02 compute-0 podman[216972]: 2025-11-26 01:17:02.374944532 +0000 UTC m=+0.346379303 container start bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:02 compute-0 podman[216972]: 2025-11-26 01:17:02.389124872 +0000 UTC m=+0.360559963 container attach bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:02 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 26 01:17:02 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 26 01:17:02 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 01:17:02 compute-0 ceph-mgr[193049]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:02 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 01:17:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 26 01:17:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 26 01:17:02 compute-0 eager_kepler[216995]: Scheduled mds.cephfs update...
Nov 26 01:17:02 compute-0 ceph-mon[192746]: Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:02 compute-0 systemd[1]: libpod-bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83.scope: Deactivated successfully.
Nov 26 01:17:03 compute-0 podman[216972]: 2025-11-26 01:17:02.998182536 +0000 UTC m=+0.969617307 container died bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5d03f8a19e25b98eb1babbfb28e2cc6630c722078b34c303fd48ba444f9f96a-merged.mount: Deactivated successfully.
Nov 26 01:17:03 compute-0 podman[216972]: 2025-11-26 01:17:03.066990355 +0000 UTC m=+1.038425106 container remove bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83 (image=quay.io/ceph/ceph:v18, name=eager_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:17:03 compute-0 friendly_napier[216990]: {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    "0": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "devices": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "/dev/loop3"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            ],
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_name": "ceph_lv0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_size": "21470642176",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "name": "ceph_lv0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "tags": {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.crush_device_class": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.encrypted": "0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_id": "0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.vdo": "0"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            },
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "vg_name": "ceph_vg0"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        }
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    ],
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    "1": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "devices": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "/dev/loop4"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            ],
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_name": "ceph_lv1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_size": "21470642176",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "name": "ceph_lv1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "tags": {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.crush_device_class": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.encrypted": "0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_id": "1",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.vdo": "0"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            },
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "vg_name": "ceph_vg1"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        }
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    ],
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    "2": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "devices": [
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "/dev/loop5"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            ],
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_name": "ceph_lv2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_size": "21470642176",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "name": "ceph_lv2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "tags": {
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.crush_device_class": "",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.encrypted": "0",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osd_id": "2",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:                "ceph.vdo": "0"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            },
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "type": "block",
Nov 26 01:17:03 compute-0 friendly_napier[216990]:            "vg_name": "ceph_vg2"
Nov 26 01:17:03 compute-0 friendly_napier[216990]:        }
Nov 26 01:17:03 compute-0 friendly_napier[216990]:    ]
Nov 26 01:17:03 compute-0 friendly_napier[216990]: }
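The JSON block above has the shape of `ceph-volume lvm list --format json` output: one key per OSD id, each entry naming the logical volume that backs it plus the ceph.* LV tags. A sketch for condensing it into an OSD-to-device table, assuming jq and a host (or container) with ceph-volume available:

    # print 'osd.<id> <lv_path>' for each OSD found on this host
    ceph-volume lvm list --format json \
      | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path)"'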
Nov 26 01:17:03 compute-0 systemd[1]: libpod-conmon-bccd99d98eb6d3567ec9eec06f19675ec5379e31ab171d1e92621ba06f68ab83.scope: Deactivated successfully.
Nov 26 01:17:03 compute-0 systemd[1]: libpod-9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23.scope: Deactivated successfully.
Nov 26 01:17:03 compute-0 podman[216962]: 2025-11-26 01:17:03.109709379 +0000 UTC m=+1.150261338 container died 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5d94d174aae487f6365c5dc102c3f76470562d52694c3abdd54d481123ae11-merged.mount: Deactivated successfully.
Nov 26 01:17:03 compute-0 podman[216962]: 2025-11-26 01:17:03.190568738 +0000 UTC m=+1.231120707 container remove 9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_napier, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:03 compute-0 systemd[1]: libpod-conmon-9b90bade9a7748116159446630f1e1c828018512b6255b6fa72a57aa4377aa23.scope: Deactivated successfully.
Nov 26 01:17:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 26 01:17:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 26 01:17:03 compute-0 python3[217225]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 01:17:03 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 26 01:17:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:03 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 26 01:17:03 compute-0 ceph-mon[192746]: Saving service mds.cephfs spec with placement compute-0
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.366469309 +0000 UTC m=+0.093164027 container create fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:17:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 26 01:17:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.33104029 +0000 UTC m=+0.057734998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:04 compute-0 systemd[1]: Started libpod-conmon-fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903.scope.
Nov 26 01:17:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.513454741 +0000 UTC m=+0.240149469 container init fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.53541254 +0000 UTC m=+0.262107258 container start fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.541602305 +0000 UTC m=+0.268297033 container attach fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:04 compute-0 friendly_murdock[217355]: 167 167
Nov 26 01:17:04 compute-0 systemd[1]: libpod-fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903.scope: Deactivated successfully.
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.546430691 +0000 UTC m=+0.273125439 container died fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-176955eb4da0374c323a89cb9adea6fa045b1d485d7f27042370c59f0f24d3da-merged.mount: Deactivated successfully.
Nov 26 01:17:04 compute-0 python3[217357]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764119823.50531-37194-180187953561235/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f0b66bb9353ce94c732bb9473056fe6c0a7a3767 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:17:04 compute-0 podman[217314]: 2025-11-26 01:17:04.622797183 +0000 UTC m=+0.349491911 container remove fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:04 compute-0 systemd[1]: libpod-conmon-fce620eaf3fa9ce0d161418fbaf25f005fc64998ee0b16b58f7ac343ed38e903.scope: Deactivated successfully.
Nov 26 01:17:04 compute-0 podman[217403]: 2025-11-26 01:17:04.881223556 +0000 UTC m=+0.080347795 container create 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:04 compute-0 podman[217403]: 2025-11-26 01:17:04.847539787 +0000 UTC m=+0.046664046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:04 compute-0 systemd[1]: Started libpod-conmon-815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9.scope.
Nov 26 01:17:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7d4784d7ddad4ebb7f9a3f088752faca030507a3a3f456e1517f983eb9dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7d4784d7ddad4ebb7f9a3f088752faca030507a3a3f456e1517f983eb9dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7d4784d7ddad4ebb7f9a3f088752faca030507a3a3f456e1517f983eb9dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f7d4784d7ddad4ebb7f9a3f088752faca030507a3a3f456e1517f983eb9dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 podman[217403]: 2025-11-26 01:17:05.046545055 +0000 UTC m=+0.245669344 container init 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:17:05 compute-0 podman[217403]: 2025-11-26 01:17:05.07013565 +0000 UTC m=+0.269259889 container start 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:05 compute-0 podman[217403]: 2025-11-26 01:17:05.077199969 +0000 UTC m=+0.276324238 container attach 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:05 compute-0 python3[217449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
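`auth import -i` merges every entity in the supplied keyring into the cluster auth database (see the mon's dispatch/finished audit lines below). The openstack keyring's actual contents never reach the journal; a cephx client keyring of this kind generally looks like the following, with the key and caps here being placeholders rather than values from this deployment:

    # hypothetical /etc/ceph/ceph.client.openstack.keyring
    [client.openstack]
        key = AQD...placeholder...==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"

The earlier ansible copy task wrote this file with owner/group 167:167, the ceph uid/gid used inside the containers; the bare "167 167" lines emitted by the short-lived containers above look like that same uid/gid being checked.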
Nov 26 01:17:05 compute-0 podman[217450]: 2025-11-26 01:17:05.43063736 +0000 UTC m=+0.093209278 container create e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:05 compute-0 podman[217450]: 2025-11-26 01:17:05.395555912 +0000 UTC m=+0.058127900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:05 compute-0 systemd[1]: Started libpod-conmon-e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4.scope.
Nov 26 01:17:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86480d3323da73c44e78332e25ed5365ecab61f5bcd85d1f4cd0a74106274ad0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86480d3323da73c44e78332e25ed5365ecab61f5bcd85d1f4cd0a74106274ad0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:05 compute-0 podman[217450]: 2025-11-26 01:17:05.593341446 +0000 UTC m=+0.255913434 container init e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:05 compute-0 podman[217450]: 2025-11-26 01:17:05.608715489 +0000 UTC m=+0.271287417 container start e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:17:05 compute-0 podman[217450]: 2025-11-26 01:17:05.616083407 +0000 UTC m=+0.278655395 container attach e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]: {
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_id": 0,
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "type": "bluestore"
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    },
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_id": 2,
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "type": "bluestore"
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    },
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_id": 1,
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:        "type": "bluestore"
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]:    }
Nov 26 01:17:06 compute-0 kind_visvesvaraya[217419]: }
Nov 26 01:17:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 26 01:17:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/194574558' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 01:17:06 compute-0 systemd[1]: libpod-815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9.scope: Deactivated successfully.
Nov 26 01:17:06 compute-0 systemd[1]: libpod-815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9.scope: Consumed 1.167s CPU time.
Nov 26 01:17:06 compute-0 podman[217403]: 2025-11-26 01:17:06.244123416 +0000 UTC m=+1.443247685 container died 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:17:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/194574558' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 01:17:06 compute-0 systemd[1]: libpod-e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4.scope: Deactivated successfully.
Nov 26 01:17:06 compute-0 conmon[217465]: conmon e495d1fecd34b49b8e9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4.scope/container/memory.events
Nov 26 01:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f7d4784d7ddad4ebb7f9a3f088752faca030507a3a3f456e1517f983eb9dda-merged.mount: Deactivated successfully.
Nov 26 01:17:06 compute-0 podman[217450]: 2025-11-26 01:17:06.294698191 +0000 UTC m=+0.957270119 container died e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-86480d3323da73c44e78332e25ed5365ecab61f5bcd85d1f4cd0a74106274ad0-merged.mount: Deactivated successfully.
Nov 26 01:17:06 compute-0 podman[217450]: 2025-11-26 01:17:06.378248086 +0000 UTC m=+1.040819984 container remove e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4 (image=quay.io/ceph/ceph:v18, name=stoic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:17:06 compute-0 systemd[1]: libpod-conmon-e495d1fecd34b49b8e9ca0699bdf1a453814368c27ce3bbdc9380af82a9d84a4.scope: Deactivated successfully.
Nov 26 01:17:06 compute-0 podman[217403]: 2025-11-26 01:17:06.400096762 +0000 UTC m=+1.599220981 container remove 815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:06 compute-0 systemd[1]: libpod-conmon-815639d7425979a5e6820e2499711692ce48481f48141518651cb967ab7ed6c9.scope: Deactivated successfully.
Nov 26 01:17:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:06 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 26 01:17:06 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 26 01:17:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:07 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/194574558' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 26 01:17:07 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/194574558' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 26 01:17:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:07 compute-0 python3[217668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:07 compute-0 podman[217719]: 2025-11-26 01:17:07.310038687 +0000 UTC m=+0.090981465 container create 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:17:07 compute-0 podman[217719]: 2025-11-26 01:17:07.282196272 +0000 UTC m=+0.063139140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:07 compute-0 systemd[1]: Started libpod-conmon-2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753.scope.
Nov 26 01:17:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb41ac04378311acd8855d257a1126da7b4e95aebc806ae86eeb1701a2ee4b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb41ac04378311acd8855d257a1126da7b4e95aebc806ae86eeb1701a2ee4b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:07 compute-0 podman[217719]: 2025-11-26 01:17:07.473262897 +0000 UTC m=+0.254205715 container init 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:07 compute-0 podman[217719]: 2025-11-26 01:17:07.483039232 +0000 UTC m=+0.263982060 container start 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:17:07 compute-0 podman[217719]: 2025-11-26 01:17:07.508633684 +0000 UTC m=+0.289576552 container attach 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:17:07 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Nov 26 01:17:07 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Nov 26 01:17:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 26 01:17:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 26 01:17:07 compute-0 podman[217822]: 2025-11-26 01:17:07.981950703 +0000 UTC m=+0.076759824 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 01:17:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773913479' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 01:17:08 compute-0 stoic_zhukovsky[217747]: 
Nov 26 01:17:08 compute-0 stoic_zhukovsky[217747]: {"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":194,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764119771,"num_in_osds":3,"osd_in_since":1764119737,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84148224,"bytes_avail":64327778304,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-26T01:17:07.001244+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 26 01:17:08 compute-0 podman[217822]: 2025-11-26 01:17:08.106348409 +0000 UTC m=+0.201157480 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:08 compute-0 systemd[1]: libpod-2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753.scope: Deactivated successfully.
Nov 26 01:17:08 compute-0 podman[217719]: 2025-11-26 01:17:08.137418105 +0000 UTC m=+0.918360883 container died 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb41ac04378311acd8855d257a1126da7b4e95aebc806ae86eeb1701a2ee4b2-merged.mount: Deactivated successfully.
Nov 26 01:17:08 compute-0 podman[217719]: 2025-11-26 01:17:08.220548538 +0000 UTC m=+1.001491316 container remove 2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753 (image=quay.io/ceph/ceph:v18, name=stoic_zhukovsky, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:17:08 compute-0 systemd[1]: libpod-conmon-2fa9c7381be65e94375024e0c615bd7f0a58cdcbcf5f8cf775154dbb8acf7753.scope: Deactivated successfully.
Nov 26 01:17:08 compute-0 python3[217932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:08 compute-0 podman[217951]: 2025-11-26 01:17:08.79722257 +0000 UTC m=+0.093461915 container create e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:17:08 compute-0 podman[217951]: 2025-11-26 01:17:08.769124358 +0000 UTC m=+0.065363763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:08 compute-0 systemd[1]: Started libpod-conmon-e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0.scope.
Nov 26 01:17:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800c12cc7f35a1858099f8371b69ad184934b5049442754877637262ef215a62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800c12cc7f35a1858099f8371b69ad184934b5049442754877637262ef215a62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:08 compute-0 podman[217951]: 2025-11-26 01:17:08.954027849 +0000 UTC m=+0.250267284 container init e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:17:08 compute-0 podman[217951]: 2025-11-26 01:17:08.968265851 +0000 UTC m=+0.264505196 container start e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:08 compute-0 podman[217951]: 2025-11-26 01:17:08.974054354 +0000 UTC m=+0.270293729 container attach e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:17:08 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 26 01:17:09 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 26 01:17:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a7cb031c-702a-4f80-bbf7-ea0a1ca0a8fb does not exist
Nov 26 01:17:09 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3d4bae2b-4264-42be-9b92-370b7a497822 does not exist
Nov 26 01:17:09 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 942306f3-5810-4652-b16f-c2e262def232 does not exist
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:09 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 26 01:17:09 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 26 01:17:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:17:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461412157' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:17:09 compute-0 nostalgic_hawking[217983]: 
Nov 26 01:17:09 compute-0 nostalgic_hawking[217983]: {"epoch":1,"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","modified":"2025-11-26T01:13:45.866362Z","created":"2025-11-26T01:13:45.866362Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 26 01:17:09 compute-0 nostalgic_hawking[217983]: dumped monmap epoch 1
Nov 26 01:17:09 compute-0 systemd[1]: libpod-e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0.scope: Deactivated successfully.
Nov 26 01:17:09 compute-0 podman[217951]: 2025-11-26 01:17:09.687612333 +0000 UTC m=+0.983851708 container died e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-800c12cc7f35a1858099f8371b69ad184934b5049442754877637262ef215a62-merged.mount: Deactivated successfully.
Nov 26 01:17:09 compute-0 podman[217951]: 2025-11-26 01:17:09.757014429 +0000 UTC m=+1.053253764 container remove e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:09 compute-0 systemd[1]: libpod-conmon-e4ad7cbc6377e6d1005ac0b4a2900b14b38372b4e05a03aef25aa2848aeb49e0.scope: Deactivated successfully.
Nov 26 01:17:10 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.242951874 +0000 UTC m=+0.091710026 container create 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.208079161 +0000 UTC m=+0.056837383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:10 compute-0 systemd[1]: Started libpod-conmon-22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be.scope.
Nov 26 01:17:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.38616244 +0000 UTC m=+0.234920632 container init 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.400930156 +0000 UTC m=+0.249688308 container start 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.408200971 +0000 UTC m=+0.256959123 container attach 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:10 compute-0 romantic_ishizaka[218183]: 167 167
Nov 26 01:17:10 compute-0 systemd[1]: libpod-22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be.scope: Deactivated successfully.
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.413044088 +0000 UTC m=+0.261802240 container died 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:10 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 26 01:17:10 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 26 01:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b07d06a593f8208fd814ac75c158b3df2e95d9b004fe499a73ee3e51173186a-merged.mount: Deactivated successfully.
Nov 26 01:17:10 compute-0 podman[218162]: 2025-11-26 01:17:10.513589392 +0000 UTC m=+0.362347514 container remove 22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:17:10 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Nov 26 01:17:10 compute-0 systemd[1]: libpod-conmon-22e29c03b0e7f03ae807f407c9aeaff0ffdf5af48855f1549138df26164fb5be.scope: Deactivated successfully.
Nov 26 01:17:10 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Nov 26 01:17:10 compute-0 python3[218208]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:10 compute-0 podman[218222]: 2025-11-26 01:17:10.711035266 +0000 UTC m=+0.093895577 container create 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:10 compute-0 podman[218237]: 2025-11-26 01:17:10.749312645 +0000 UTC m=+0.075418657 container create 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:10 compute-0 podman[218222]: 2025-11-26 01:17:10.669539677 +0000 UTC m=+0.052399988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:10 compute-0 systemd[1]: Started libpod-conmon-0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0.scope.
Nov 26 01:17:10 compute-0 podman[218237]: 2025-11-26 01:17:10.702955818 +0000 UTC m=+0.029061860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd75a204808deebacd168ce88a680c09fd334b963dd55df85d0d131a82838388/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd75a204808deebacd168ce88a680c09fd334b963dd55df85d0d131a82838388/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 systemd[1]: Started libpod-conmon-81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4.scope.
Nov 26 01:17:10 compute-0 podman[218222]: 2025-11-26 01:17:10.846765821 +0000 UTC m=+0.229626092 container init 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:17:10 compute-0 podman[218222]: 2025-11-26 01:17:10.865731506 +0000 UTC m=+0.248591777 container start 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:10 compute-0 podman[218222]: 2025-11-26 01:17:10.870239933 +0000 UTC m=+0.253100204 container attach 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:17:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:10 compute-0 podman[218237]: 2025-11-26 01:17:10.927363613 +0000 UTC m=+0.253469665 container init 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:17:10 compute-0 podman[218237]: 2025-11-26 01:17:10.951095692 +0000 UTC m=+0.277201654 container start 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 26 01:17:10 compute-0 podman[218237]: 2025-11-26 01:17:10.95527138 +0000 UTC m=+0.281377352 container attach 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 26 01:17:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1427211556' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 01:17:11 compute-0 modest_dewdney[218257]: [client.openstack]
Nov 26 01:17:11 compute-0 modest_dewdney[218257]: 	key = AQAhVCZpAAAAABAAlD7bW8mlSeVnQJPFz4cgog==
Nov 26 01:17:11 compute-0 modest_dewdney[218257]: 	caps mgr = "allow *"
Nov 26 01:17:11 compute-0 modest_dewdney[218257]: 	caps mon = "profile rbd"
Nov 26 01:17:11 compute-0 modest_dewdney[218257]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 26 01:17:11 compute-0 systemd[1]: libpod-0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0.scope: Deactivated successfully.
Nov 26 01:17:11 compute-0 podman[218222]: 2025-11-26 01:17:11.609020974 +0000 UTC m=+0.991881285 container died 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd75a204808deebacd168ce88a680c09fd334b963dd55df85d0d131a82838388-merged.mount: Deactivated successfully.
Nov 26 01:17:11 compute-0 podman[218222]: 2025-11-26 01:17:11.71920914 +0000 UTC m=+1.102069411 container remove 0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0 (image=quay.io/ceph/ceph:v18, name=modest_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:11 compute-0 systemd[1]: libpod-conmon-0f234027911fb8e22de04f9abdbf8a41e1db45a258b77ac56db3924c03c7f0d0.scope: Deactivated successfully.
Nov 26 01:17:12 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 26 01:17:12 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 26 01:17:12 compute-0 focused_vaughan[218262]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:17:12 compute-0 focused_vaughan[218262]: --> relative data size: 1.0
Nov 26 01:17:12 compute-0 focused_vaughan[218262]: --> All data devices are unavailable
Nov 26 01:17:12 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/1427211556' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 26 01:17:12 compute-0 systemd[1]: libpod-81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4.scope: Deactivated successfully.
Nov 26 01:17:12 compute-0 systemd[1]: libpod-81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4.scope: Consumed 1.129s CPU time.
Nov 26 01:17:12 compute-0 podman[218324]: 2025-11-26 01:17:12.239122702 +0000 UTC m=+0.051491572 container died 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-17a59f658dfdc7690f76c8f27eafd344964d9ce799f418110fa23e9e9e68009e-merged.mount: Deactivated successfully.
Nov 26 01:17:12 compute-0 podman[218324]: 2025-11-26 01:17:12.351300624 +0000 UTC m=+0.163669464 container remove 81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:17:12 compute-0 systemd[1]: libpod-conmon-81a4f4c25a7513f367941de7bdc5d8b42daa5e34f5d16c05407ad1a7af3003a4.scope: Deactivated successfully.
Nov 26 01:17:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:13 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 26 01:17:13 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 26 01:17:13 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.554619406 +0000 UTC m=+0.079473591 container create 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:17:13 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.521619626 +0000 UTC m=+0.046473891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:13 compute-0 systemd[1]: Started libpod-conmon-6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b.scope.
Nov 26 01:17:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.699673034 +0000 UTC m=+0.224527239 container init 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.714880443 +0000 UTC m=+0.239734638 container start 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.721082408 +0000 UTC m=+0.245936613 container attach 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:13 compute-0 eager_shtern[218642]: 167 167
Nov 26 01:17:13 compute-0 systemd[1]: libpod-6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b.scope: Deactivated successfully.
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.727708094 +0000 UTC m=+0.252562299 container died 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e0f03d149af3e43601514282feadf52bd71b9a45f1decba6408cfcbad1b401-merged.mount: Deactivated successfully.
Nov 26 01:17:13 compute-0 ansible-async_wrapper.py[218636]: Invoked with j909742265468 30 /home/zuul/.ansible/tmp/ansible-tmp-1764119832.8940647-37266-234294530693173/AnsiballZ_command.py _
Nov 26 01:17:13 compute-0 ansible-async_wrapper.py[218659]: Starting module and watcher
Nov 26 01:17:13 compute-0 podman[218601]: 2025-11-26 01:17:13.805189568 +0000 UTC m=+0.330043763 container remove 6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:13 compute-0 ansible-async_wrapper.py[218660]: Start module (218660)
Nov 26 01:17:13 compute-0 ansible-async_wrapper.py[218659]: Start watching 218660 (30)
Nov 26 01:17:13 compute-0 ansible-async_wrapper.py[218636]: Return async_wrapper task started.
Nov 26 01:17:13 compute-0 systemd[1]: libpod-conmon-6df18e4953d567aa24e939930014c07ff5eacfcbbe1499e2cfbac42c34306e8b.scope: Deactivated successfully.
Nov 26 01:17:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:13 compute-0 python3[218661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 26 01:17:14 compute-0 podman[218669]: 2025-11-26 01:17:14.050197863 +0000 UTC m=+0.074994075 container create 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:17:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 26 01:17:14 compute-0 podman[218669]: 2025-11-26 01:17:14.022174143 +0000 UTC m=+0.046970365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:14 compute-0 systemd[1]: Started libpod-conmon-68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9.scope.
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.121094261 +0000 UTC m=+0.110579447 container create e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:17:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6a1880842eb7b847b10df95c3998f9b6020f9070a5048062d0d76d6c5dc0e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6a1880842eb7b847b10df95c3998f9b6020f9070a5048062d0d76d6c5dc0e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6a1880842eb7b847b10df95c3998f9b6020f9070a5048062d0d76d6c5dc0e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6a1880842eb7b847b10df95c3998f9b6020f9070a5048062d0d76d6c5dc0e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 systemd[1]: Started libpod-conmon-e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988.scope.
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.084629383 +0000 UTC m=+0.074114599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:14 compute-0 podman[218669]: 2025-11-26 01:17:14.185689332 +0000 UTC m=+0.210485564 container init 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:14 compute-0 podman[218669]: 2025-11-26 01:17:14.206127078 +0000 UTC m=+0.230923280 container start 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:17:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:14 compute-0 podman[218669]: 2025-11-26 01:17:14.217369025 +0000 UTC m=+0.242165237 container attach 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ba47db5705d13cb3b06fe10b537c3e8318ac40d9db8f9c65263b7c80c529dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ba47db5705d13cb3b06fe10b537c3e8318ac40d9db8f9c65263b7c80c529dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.245300372 +0000 UTC m=+0.234785648 container init e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.261651413 +0000 UTC m=+0.251136609 container start e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.266686294 +0000 UTC m=+0.256171580 container attach e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:14 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:17:14 compute-0 eager_ishizaka[218702]: 
Nov 26 01:17:14 compute-0 eager_ishizaka[218702]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
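Annotation: the JSON line above is the result of the "orch status" probe that ansible logs at 01:17:13. A minimal Python sketch of that probe follows; the podman arguments and fsid are copied verbatim from the logged command, while the parsing logic around it is illustrative and not part of the log.

    import json
    import subprocess

    # Mirrors the podman invocation recorded by ansible-ansible.legacy.command above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # Expected shape, per the container output above:
    # {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
    status = json.loads(out)
    if not status["available"] or status["paused"]:
        raise SystemExit("cephadm orchestrator not ready")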
Nov 26 01:17:14 compute-0 systemd[1]: libpod-e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988.scope: Deactivated successfully.
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.858531444 +0000 UTC m=+0.848016660 container died e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ba47db5705d13cb3b06fe10b537c3e8318ac40d9db8f9c65263b7c80c529dc-merged.mount: Deactivated successfully.
Nov 26 01:17:14 compute-0 podman[218676]: 2025-11-26 01:17:14.941515123 +0000 UTC m=+0.931000309 container remove e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:14 compute-0 systemd[1]: libpod-conmon-e3f4389fcaa50381774812ce443076a30f17dda8f47dc9a177d11fcedff5d988.scope: Deactivated successfully.
Nov 26 01:17:14 compute-0 ansible-async_wrapper.py[218660]: Module complete (218660)
Nov 26 01:17:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:15 compute-0 jovial_colden[218696]: {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    "0": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "devices": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "/dev/loop3"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            ],
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_name": "ceph_lv0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_size": "21470642176",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "name": "ceph_lv0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "tags": {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.crush_device_class": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.encrypted": "0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_id": "0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.vdo": "0"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            },
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "vg_name": "ceph_vg0"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        }
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    ],
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    "1": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "devices": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "/dev/loop4"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            ],
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_name": "ceph_lv1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_size": "21470642176",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "name": "ceph_lv1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "tags": {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.crush_device_class": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.encrypted": "0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_id": "1",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.vdo": "0"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            },
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "vg_name": "ceph_vg1"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        }
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    ],
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    "2": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "devices": [
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "/dev/loop5"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            ],
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_name": "ceph_lv2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_size": "21470642176",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "name": "ceph_lv2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "tags": {
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.crush_device_class": "",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.encrypted": "0",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osd_id": "2",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:                "ceph.vdo": "0"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            },
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "type": "block",
Nov 26 01:17:15 compute-0 jovial_colden[218696]:            "vg_name": "ceph_vg2"
Nov 26 01:17:15 compute-0 jovial_colden[218696]:        }
Nov 26 01:17:15 compute-0 jovial_colden[218696]:    ]
Nov 26 01:17:15 compute-0 jovial_colden[218696]: }
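Annotation: the jovial_colden container above prints a per-OSD LVM report (osd id mapped to its backing logical volume and ceph.* lv_tags). Its shape matches "ceph-volume lvm list --format json" output, although the exact command is not shown in the log, so treat that provenance as an assumption. A minimal sketch for summarizing such a report:

    import json

    def summarize(report_text: str) -> None:
        # report_text is the JSON document emitted above (osd id -> list of LVs).
        report = json.loads(report_text)
        for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"(osd_fsid={tags['ceph.osd_fsid']}, "
                      f"encrypted={tags['ceph.encrypted']})")

Run against the report above, this would yield lines such as "osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff, encrypted=0)".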
Nov 26 01:17:15 compute-0 systemd[1]: libpod-68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9.scope: Deactivated successfully.
Nov 26 01:17:15 compute-0 podman[218669]: 2025-11-26 01:17:15.081172539 +0000 UTC m=+1.105968751 container died 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca6a1880842eb7b847b10df95c3998f9b6020f9070a5048062d0d76d6c5dc0e9-merged.mount: Deactivated successfully.
Nov 26 01:17:15 compute-0 podman[218669]: 2025-11-26 01:17:15.154965899 +0000 UTC m=+1.179762071 container remove 68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:15 compute-0 systemd[1]: libpod-conmon-68520f8ce69f90b89878b53d90977dce87deb70f2ba973a687a497bf7ead15c9.scope: Deactivated successfully.
Nov 26 01:17:15 compute-0 python3[218793]: ansible-ansible.legacy.async_status Invoked with jid=j909742265468.218636 mode=status _async_dir=/root/.ansible_async
Nov 26 01:17:15 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Nov 26 01:17:15 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Nov 26 01:17:15 compute-0 python3[218918]: ansible-ansible.legacy.async_status Invoked with jid=j909742265468.218636 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 01:17:15 compute-0 podman[218927]: 2025-11-26 01:17:15.790900221 +0000 UTC m=+0.120731613 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:17:15 compute-0 podman[218926]: 2025-11-26 01:17:15.801120239 +0000 UTC m=+0.136032624 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:17:15 compute-0 podman[218991]: 2025-11-26 01:17:15.916539142 +0000 UTC m=+0.124249002 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
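Annotation: the three health_status events above come from the periodic podman healthchecks that edpm_ansible configures for these containers. The same state can be queried on demand; a minimal sketch, assuming a recent podman where the Go template path {{.State.Health.Status}} is available (the container names are taken from the log):

    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute", "ovn_controller"):
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(f"{name}: {status}")  # e.g. "healthy", matching health_status=healthy above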
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.29895844 +0000 UTC m=+0.094065892 container create 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.262301427 +0000 UTC m=+0.057408929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:16 compute-0 systemd[1]: Started libpod-conmon-1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2.scope.
Nov 26 01:17:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.452466656 +0000 UTC m=+0.247574158 container init 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.470377601 +0000 UTC m=+0.265485063 container start 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.478815619 +0000 UTC m=+0.273923131 container attach 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:16 compute-0 recursing_mclaren[219101]: 167 167
Nov 26 01:17:16 compute-0 systemd[1]: libpod-1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2.scope: Deactivated successfully.
Nov 26 01:17:16 compute-0 conmon[219101]: conmon 1bbbfa7ed0a6a5688e1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2.scope/container/memory.events
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.486442844 +0000 UTC m=+0.281550306 container died 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-75657a61f340c5b88aa4b692b8b213eccd1e085a81dbf3fbfa1d0a9e44e3af7f-merged.mount: Deactivated successfully.
Nov 26 01:17:16 compute-0 podman[219060]: 2025-11-26 01:17:16.571000337 +0000 UTC m=+0.366107799 container remove 1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 01:17:16 compute-0 python3[219103]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:16 compute-0 systemd[1]: libpod-conmon-1bbbfa7ed0a6a5688e1f78f0c3d166eb76ec79e5578af756a33b57d418f2b0b2.scope: Deactivated successfully.
Nov 26 01:17:16 compute-0 podman[219119]: 2025-11-26 01:17:16.693419007 +0000 UTC m=+0.080140609 container create 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:17:16 compute-0 podman[219119]: 2025-11-26 01:17:16.658361649 +0000 UTC m=+0.045083331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:16 compute-0 systemd[1]: Started libpod-conmon-4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31.scope.
Nov 26 01:17:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6095e6785b7693b6445a9329d4f19855bcb028de908968ddf7aadf48fd9dd4a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6095e6785b7693b6445a9329d4f19855bcb028de908968ddf7aadf48fd9dd4a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:16 compute-0 podman[219140]: 2025-11-26 01:17:16.873050699 +0000 UTC m=+0.097025445 container create 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:16 compute-0 podman[219119]: 2025-11-26 01:17:16.893167806 +0000 UTC m=+0.279889438 container init 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:17:16 compute-0 podman[219119]: 2025-11-26 01:17:16.908026735 +0000 UTC m=+0.294748297 container start 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:17:16 compute-0 podman[219119]: 2025-11-26 01:17:16.913132538 +0000 UTC m=+0.299854110 container attach 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:16 compute-0 podman[219140]: 2025-11-26 01:17:16.836704184 +0000 UTC m=+0.060678980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:16 compute-0 systemd[1]: Started libpod-conmon-5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac.scope.
Nov 26 01:17:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bccf650cff1485a2cc5bfb05c93222d617e63ea45c8e4647ff2de5f11c0932/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bccf650cff1485a2cc5bfb05c93222d617e63ea45c8e4647ff2de5f11c0932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bccf650cff1485a2cc5bfb05c93222d617e63ea45c8e4647ff2de5f11c0932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bccf650cff1485a2cc5bfb05c93222d617e63ea45c8e4647ff2de5f11c0932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:17 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 26 01:17:17 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 26 01:17:17 compute-0 podman[219140]: 2025-11-26 01:17:17.072031097 +0000 UTC m=+0.296005883 container init 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:17 compute-0 podman[219140]: 2025-11-26 01:17:17.097291799 +0000 UTC m=+0.321266505 container start 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:17 compute-0 podman[219140]: 2025-11-26 01:17:17.102950908 +0000 UTC m=+0.326925654 container attach 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:17:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 26 01:17:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 26 01:17:17 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1f deep-scrub starts
Nov 26 01:17:17 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 4.1f deep-scrub ok
Nov 26 01:17:17 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:17:17 compute-0 kind_pasteur[219147]: 
Nov 26 01:17:17 compute-0 kind_pasteur[219147]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 26 01:17:17 compute-0 systemd[1]: libpod-4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31.scope: Deactivated successfully.
Nov 26 01:17:17 compute-0 podman[219119]: 2025-11-26 01:17:17.602090315 +0000 UTC m=+0.988811917 container died 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6095e6785b7693b6445a9329d4f19855bcb028de908968ddf7aadf48fd9dd4a-merged.mount: Deactivated successfully.
Nov 26 01:17:17 compute-0 podman[219119]: 2025-11-26 01:17:17.691689441 +0000 UTC m=+1.078411033 container remove 4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31 (image=quay.io/ceph/ceph:v18, name=kind_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:17 compute-0 systemd[1]: libpod-conmon-4315d73fac3857b4c82fc9ce1a008b678a70f194a58214a44c14b251b165db31.scope: Deactivated successfully.
Nov 26 01:17:18 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 26 01:17:18 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 26 01:17:18 compute-0 musing_black[219160]: {
Nov 26 01:17:18 compute-0 musing_black[219160]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:17:18 compute-0 musing_black[219160]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_id": 0,
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "type": "bluestore"
Nov 26 01:17:18 compute-0 musing_black[219160]:    },
Nov 26 01:17:18 compute-0 musing_black[219160]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:17:18 compute-0 musing_black[219160]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_id": 2,
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "type": "bluestore"
Nov 26 01:17:18 compute-0 musing_black[219160]:    },
Nov 26 01:17:18 compute-0 musing_black[219160]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:17:18 compute-0 musing_black[219160]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_id": 1,
Nov 26 01:17:18 compute-0 musing_black[219160]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:18 compute-0 musing_black[219160]:        "type": "bluestore"
Nov 26 01:17:18 compute-0 musing_black[219160]:    }
Nov 26 01:17:18 compute-0 musing_black[219160]: }
Nov 26 01:17:18 compute-0 systemd[1]: libpod-5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac.scope: Deactivated successfully.
Nov 26 01:17:18 compute-0 podman[219140]: 2025-11-26 01:17:18.131254239 +0000 UTC m=+1.355229005 container died 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:18 compute-0 systemd[1]: libpod-5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac.scope: Consumed 1.026s CPU time.
Nov 26 01:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6bccf650cff1485a2cc5bfb05c93222d617e63ea45c8e4647ff2de5f11c0932-merged.mount: Deactivated successfully.
Nov 26 01:17:18 compute-0 podman[219140]: 2025-11-26 01:17:18.221872443 +0000 UTC m=+1.445847189 container remove 5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:18 compute-0 systemd[1]: libpod-conmon-5d9905239bfc45ead8ace71f64544b17a671edd7d44608132c76d5f53d4997ac.scope: Deactivated successfully.
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:18 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev d54855ae-71a9-43ac-a9a3-521d772d0c08 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.klkwcz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.klkwcz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.klkwcz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:18 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.klkwcz on compute-0
Nov 26 01:17:18 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.klkwcz on compute-0
Nov 26 01:17:18 compute-0 python3[219319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:18 compute-0 ansible-async_wrapper.py[218659]: Done in kid B.
Nov 26 01:17:18 compute-0 podman[219359]: 2025-11-26 01:17:18.917142277 +0000 UTC m=+0.085900021 container create 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:18 compute-0 podman[219359]: 2025-11-26 01:17:18.887302216 +0000 UTC m=+0.056059991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:18 compute-0 systemd[1]: Started libpod-conmon-005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377.scope.
Nov 26 01:17:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8aa3b16c8bd76c9e9a2ef9a6e04ef1da9a62feead6f90f1ba847f3d1f605b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8aa3b16c8bd76c9e9a2ef9a6e04ef1da9a62feead6f90f1ba847f3d1f605b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:19 compute-0 podman[219359]: 2025-11-26 01:17:19.062968277 +0000 UTC m=+0.231726051 container init 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:19 compute-0 podman[219359]: 2025-11-26 01:17:19.072366932 +0000 UTC m=+0.241124666 container start 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:19 compute-0 podman[219359]: 2025-11-26 01:17:19.078147605 +0000 UTC m=+0.246905359 container attach 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:17:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.klkwcz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 26 01:17:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.klkwcz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 26 01:17:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:19 compute-0 ceph-mon[192746]: Deploying daemon rgw.rgw.compute-0.klkwcz on compute-0
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.406603002 +0000 UTC m=+0.100044521 container create 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.368269692 +0000 UTC m=+0.061711261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:19 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 26 01:17:19 compute-0 systemd[1]: Started libpod-conmon-669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da.scope.
Nov 26 01:17:19 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 26 01:17:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.540117645 +0000 UTC m=+0.233559174 container init 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.551020032 +0000 UTC m=+0.244461521 container start 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:17:19 compute-0 zen_rubin[219453]: 167 167
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.564931504 +0000 UTC m=+0.258372993 container attach 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 01:17:19 compute-0 systemd[1]: libpod-669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da.scope: Deactivated successfully.
Nov 26 01:17:19 compute-0 conmon[219453]: conmon 669bbc727a9c1ccf679d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da.scope/container/memory.events
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.572561609 +0000 UTC m=+0.266003118 container died 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f5bba95322ca7bcdeec155934831c087a327c39d32559dc0317b89ba662275b-merged.mount: Deactivated successfully.
Nov 26 01:17:19 compute-0 podman[219419]: 2025-11-26 01:17:19.629233676 +0000 UTC m=+0.322675165 container remove 669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rubin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:17:19 compute-0 systemd[1]: libpod-conmon-669bbc727a9c1ccf679da84f9d7f12094cf57e210ed3ac781414900e3047c7da.scope: Deactivated successfully.
Nov 26 01:17:19 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:17:19 compute-0 crazy_shockley[219385]: 
Nov 26 01:17:19 compute-0 crazy_shockley[219385]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 26 01:17:19 compute-0 systemd[1]: libpod-005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377.scope: Deactivated successfully.
Nov 26 01:17:19 compute-0 conmon[219385]: conmon 005934a7d7fdc94aef50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377.scope/container/memory.events
Nov 26 01:17:19 compute-0 podman[219359]: 2025-11-26 01:17:19.690185984 +0000 UTC m=+0.858943748 container died 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:17:19 compute-0 systemd[1]: Reloading.
Nov 26 01:17:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:17:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-df8aa3b16c8bd76c9e9a2ef9a6e04ef1da9a62feead6f90f1ba847f3d1f605b5-merged.mount: Deactivated successfully.
Nov 26 01:17:20 compute-0 podman[219359]: 2025-11-26 01:17:20.187635054 +0000 UTC m=+1.356392778 container remove 005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377 (image=quay.io/ceph/ceph:v18, name=crazy_shockley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:20 compute-0 systemd[1]: Reloading.
Nov 26 01:17:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:17:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:17:20 compute-0 systemd[1]: libpod-conmon-005934a7d7fdc94aef50deddb1982282d4fcc68a9f900898acd025ba73de7377.scope: Deactivated successfully.
Nov 26 01:17:20 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.klkwcz for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:17:20 compute-0 podman[219567]: 2025-11-26 01:17:20.807530793 +0000 UTC m=+0.114643822 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:17:20 compute-0 podman[219566]: 2025-11-26 01:17:20.807562744 +0000 UTC m=+0.115332031 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container)
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:21 compute-0 podman[219651]: 2025-11-26 01:17:21.082098562 +0000 UTC m=+0.079182083 container create 7b10489e9d549fba76f502dc5d4c363531c7f952c51e8dcc9e2221644e424f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-rgw-rgw-compute-0-klkwcz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:17:21 compute-0 podman[219651]: 2025-11-26 01:17:21.042352131 +0000 UTC m=+0.039435692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85123db948e41dbb89b55717dd0f31ca10db7651e3a4f871fb8b7af68682490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85123db948e41dbb89b55717dd0f31ca10db7651e3a4f871fb8b7af68682490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85123db948e41dbb89b55717dd0f31ca10db7651e3a4f871fb8b7af68682490/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85123db948e41dbb89b55717dd0f31ca10db7651e3a4f871fb8b7af68682490/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.klkwcz supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 podman[219651]: 2025-11-26 01:17:21.219595527 +0000 UTC m=+0.216679068 container init 7b10489e9d549fba76f502dc5d4c363531c7f952c51e8dcc9e2221644e424f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-rgw-rgw-compute-0-klkwcz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:17:21 compute-0 podman[219651]: 2025-11-26 01:17:21.23353993 +0000 UTC m=+0.230623441 container start 7b10489e9d549fba76f502dc5d4c363531c7f952c51e8dcc9e2221644e424f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-rgw-rgw-compute-0-klkwcz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:17:21 compute-0 bash[219651]: 7b10489e9d549fba76f502dc5d4c363531c7f952c51e8dcc9e2221644e424f92
Nov 26 01:17:21 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.klkwcz for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:21 compute-0 radosgw[219693]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:17:21 compute-0 radosgw[219693]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 26 01:17:21 compute-0 radosgw[219693]: framework: beast
Nov 26 01:17:21 compute-0 radosgw[219693]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 26 01:17:21 compute-0 radosgw[219693]: init_numa not setting numa affinity
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev d54855ae-71a9-43ac-a9a3-521d772d0c08 (Updating rgw.rgw deployment (+1 -> 1))
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event d54855ae-71a9-43ac-a9a3-521d772d0c08 (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 26 01:17:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev 89e3c7ad-b59b-4a3b-a29a-1a7ec870ec2b (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gmppdy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gmppdy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gmppdy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 01:17:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.gmppdy on compute-0
Nov 26 01:17:21 compute-0 ceph-mgr[193049]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.gmppdy on compute-0
Nov 26 01:17:21 compute-0 python3[219694]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:21 compute-0 podman[219757]: 2025-11-26 01:17:21.513017896 +0000 UTC m=+0.070663022 container create da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:17:21 compute-0 podman[219757]: 2025-11-26 01:17:21.487769115 +0000 UTC m=+0.045414271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:21 compute-0 systemd[1]: Started libpod-conmon-da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4.scope.
Nov 26 01:17:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d243a12acc6b608749d0e11df85bdf50c174cc4b22b1973e6f5a52941cb192b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d243a12acc6b608749d0e11df85bdf50c174cc4b22b1973e6f5a52941cb192b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:21 compute-0 podman[219757]: 2025-11-26 01:17:21.676661948 +0000 UTC m=+0.234307154 container init da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:17:21 compute-0 podman[219757]: 2025-11-26 01:17:21.696749004 +0000 UTC m=+0.254394150 container start da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:17:21 compute-0 podman[219757]: 2025-11-26 01:17:21.703261308 +0000 UTC m=+0.260906454 container attach da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:22 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 26 01:17:22 compute-0 quirky_wing[219803]: 
Nov 26 01:17:22 compute-0 quirky_wing[219803]: [{"container_id": "6e99a14a2bad", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.42%", "created": "2025-11-26T01:15:18.244363Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-26T01:15:18.328866Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.016761Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-11-26T01:15:17.992269Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@crash.compute-0", "version": "18.2.7"}, {"container_id": "7222fbf079f0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "24.93%", "created": "2025-11-26T01:13:56.932172Z", "daemon_id": "compute-0.vbisdw", "daemon_name": "mgr.compute-0.vbisdw", "daemon_type": "mgr", "events": ["2025-11-26T01:16:28.738276Z daemon:mgr.compute-0.vbisdw [INFO] \"Reconfigured mgr.compute-0.vbisdw on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.016631Z", "memory_usage": 549348966, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-26T01:13:56.723892Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mgr.compute-0.vbisdw", "version": "18.2.7"}, {"container_id": "4ef91eb781dd", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.26%", "created": "2025-11-26T01:13:49.259972Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-26T01:16:27.410899Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.016469Z", "memory_request": 2147483648, "memory_usage": 40003174, "ports": [], "service_name": "mon", "started": "2025-11-26T01:13:53.434303Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@mon.compute-0", "version": "18.2.7"}, {"container_id": "fd4f624ba4cc", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.89%", "created": "2025-11-26T01:15:51.426289Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-26T01:15:51.491434Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.016964Z", "memory_request": 4294967296, "memory_usage": 69090672, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T01:15:51.232607Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@osd.0", "version": "18.2.7"}, {"container_id": "538a4fcc44e5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.22%", "created": "2025-11-26T01:15:58.003473Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-26T01:15:58.054921Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.017135Z", "memory_request": 4294967296, "memory_usage": 68230840, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T01:15:57.891720Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@osd.1", "version": "18.2.7"}, {"container_id": "f57382b83849", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "4.16%", "created": "2025-11-26T01:16:04.694660Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-26T01:16:04.782074Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-26T01:17:09.017255Z", "memory_request": 4294967296, "memory_usage": 64896368, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-26T01:16:04.469565Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-36901f64-240e-5c29-a2e2-29b56f2c329c@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.klkwcz", "daemon_name": "rgw.rgw.compute-0.klkwcz", "daemon_type": "rgw", "events": ["2025-11-26T01:17:21.337928Z daemon:rgw.rgw.compute-0.klkwcz [INFO] \"Deployed rgw.rgw.compute-0.klkwcz on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 26 01:17:22 compute-0 systemd[1]: libpod-da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4.scope: Deactivated successfully.
Nov 26 01:17:22 compute-0 podman[219757]: 2025-11-26 01:17:22.287655497 +0000 UTC m=+0.845300623 container died da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d243a12acc6b608749d0e11df85bdf50c174cc4b22b1973e6f5a52941cb192b-merged.mount: Deactivated successfully.
Nov 26 01:17:22 compute-0 podman[219757]: 2025-11-26 01:17:22.3580185 +0000 UTC m=+0.915663626 container remove da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4 (image=quay.io/ceph/ceph:v18, name=quirky_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:22 compute-0 ceph-mon[192746]: Saving service rgw.rgw spec with placement compute-0
Nov 26 01:17:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gmppdy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 26 01:17:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gmppdy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 26 01:17:22 compute-0 ceph-mon[192746]: Deploying daemon mds.cephfs.compute-0.gmppdy on compute-0
Nov 26 01:17:22 compute-0 systemd[1]: libpod-conmon-da3b6a45d580e33f42a8b24575c9c79daa181e9cbf140b0fbabcdbe1efbc6ad4.scope: Deactivated successfully.
Nov 26 01:17:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 26 01:17:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 26 01:17:22 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 26 01:17:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 26 01:17:22 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 26 01:17:22 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 26 01:17:22 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:22 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.506398982 +0000 UTC m=+0.066105274 container create 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.474579375 +0000 UTC m=+0.034285737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:22 compute-0 systemd[1]: Started libpod-conmon-067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857.scope.
Nov 26 01:17:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.651278885 +0000 UTC m=+0.210985217 container init 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.66848422 +0000 UTC m=+0.228190532 container start 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.674639063 +0000 UTC m=+0.234345385 container attach 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:17:22 compute-0 friendly_euler[219960]: 167 167
Nov 26 01:17:22 compute-0 systemd[1]: libpod-067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857.scope: Deactivated successfully.
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.679908402 +0000 UTC m=+0.239614724 container died 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c975c418a5913035f60481ac342e55f9b54937b535a31dd91908428bce6227f7-merged.mount: Deactivated successfully.
Nov 26 01:17:22 compute-0 podman[219944]: 2025-11-26 01:17:22.763546019 +0000 UTC m=+0.323252301 container remove 067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:17:22 compute-0 systemd[1]: libpod-conmon-067d413a40a40e3557618cd2ae9f01a2d906bb6dce91b34a2fa997a99b0bf857.scope: Deactivated successfully.
Nov 26 01:17:22 compute-0 systemd[1]: Reloading.
Nov 26 01:17:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:17:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:17:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v115: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:23 compute-0 systemd[1]: Reloading.
Nov 26 01:17:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 26 01:17:23 compute-0 ceph-mon[192746]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:17:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 01:17:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 26 01:17:23 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 26 01:17:23 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
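The POOL_APP_NOT_ENABLED warning fires because RGW creates ".rgw.root" before tagging it; the "osd pool application enable" transaction that just finished clears it for that pool (the same pattern repeats below for default.rgw.log and default.rgw.control). A one-call sketch of that tagging step, pool and app names taken from the log:

    # Sketch: tag a pool with an application, as RGW does for ".rgw.root"
    # above; this is what clears POOL_APP_NOT_ENABLED for that pool.
    import subprocess

    def enable_app(pool: str, app: str) -> None:
        subprocess.run(
            ["ceph", "osd", "pool", "application", "enable", pool, app],
            check=True)

    enable_app(".rgw.root", "rgw")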
Nov 26 01:17:23 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:23 compute-0 python3[220044]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
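The Ansible task above shells out to a throwaway "podman run" with the host's /etc/ceph mounted so the containerized ceph CLI can reach the cluster. A Python sketch of the same wrapper; flags, image, fsid, and mount paths are copied from the log line, while the surrounding helper is illustrative:

    # Sketch: reproduce the ansible task's "podman run ... ceph -s -f json"
    # and parse the JSON it prints. Image, fsid and mounts are from the log.
    import json, subprocess

    def ceph_status() -> dict:
        cmd = ["podman", "run", "--rm", "--net=host", "--ipc=host",
               "--volume", "/etc/ceph:/etc/ceph:z",
               "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
               "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
               "--fsid", "36901f64-240e-5c29-a2e2-29b56f2c329c",
               "-c", "/etc/ceph/ceph.conf",
               "-k", "/etc/ceph/ceph.client.admin.keyring",
               "-s", "-f", "json"]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(out.stdout)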
Nov 26 01:17:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:17:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:17:23 compute-0 podman[220054]: 2025-11-26 01:17:23.609568072 +0000 UTC m=+0.073574504 container create 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:17:23 compute-0 podman[220054]: 2025-11-26 01:17:23.581873402 +0000 UTC m=+0.045879874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:23 compute-0 systemd[1]: Started libpod-conmon-97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f.scope.
Nov 26 01:17:23 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.gmppdy for 36901f64-240e-5c29-a2e2-29b56f2c329c...
Nov 26 01:17:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61593a6b8dbb04bb9b66797b974a43738586a66b5bc281377e94c189cd063a1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61593a6b8dbb04bb9b66797b974a43738586a66b5bc281377e94c189cd063a1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:23 compute-0 podman[220054]: 2025-11-26 01:17:23.874933571 +0000 UTC m=+0.338940073 container init 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:17:23 compute-0 podman[220054]: 2025-11-26 01:17:23.89795531 +0000 UTC m=+0.361961772 container start 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:17:23 compute-0 podman[220054]: 2025-11-26 01:17:23.905422961 +0000 UTC m=+0.369429473 container attach 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:17:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:24 compute-0 podman[220146]: 2025-11-26 01:17:24.27126951 +0000 UTC m=+0.076609870 container create 7c8776b7f728ea7bfde0dc39c51baa26e0e0b96a1917178045481b51e17dfc01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mds-cephfs-compute-0-gmppdy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:17:24 compute-0 podman[220146]: 2025-11-26 01:17:24.243817786 +0000 UTC m=+0.049158116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:24 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 26 01:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acdba973aa2a5a1dc28886d6db9b4952276cb6371cbc8c7f66502a6421e07d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acdba973aa2a5a1dc28886d6db9b4952276cb6371cbc8c7f66502a6421e07d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acdba973aa2a5a1dc28886d6db9b4952276cb6371cbc8c7f66502a6421e07d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acdba973aa2a5a1dc28886d6db9b4952276cb6371cbc8c7f66502a6421e07d6/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.gmppdy supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:24 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 26 01:17:24 compute-0 podman[220146]: 2025-11-26 01:17:24.388443773 +0000 UTC m=+0.193784183 container init 7c8776b7f728ea7bfde0dc39c51baa26e0e0b96a1917178045481b51e17dfc01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mds-cephfs-compute-0-gmppdy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:17:24 compute-0 podman[220146]: 2025-11-26 01:17:24.406732888 +0000 UTC m=+0.212073248 container start 7c8776b7f728ea7bfde0dc39c51baa26e0e0b96a1917178045481b51e17dfc01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mds-cephfs-compute-0-gmppdy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 26 01:17:24 compute-0 bash[220146]: 7c8776b7f728ea7bfde0dc39c51baa26e0e0b96a1917178045481b51e17dfc01
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 26 01:17:24 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 01:17:24 compute-0 ceph-mon[192746]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 26 01:17:24 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 26 01:17:24 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.gmppdy for 36901f64-240e-5c29-a2e2-29b56f2c329c.
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:24 compute-0 ceph-mds[220183]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:17:24 compute-0 ceph-mds[220183]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 26 01:17:24 compute-0 ceph-mds[220183]: main not setting numa affinity
Nov 26 01:17:24 compute-0 ceph-mds[220183]: pidfile_write: ignore empty --pid-file
Nov 26 01:17:24 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mds-cephfs-compute-0-gmppdy[220179]: starting mds.cephfs.compute-0.gmppdy at 
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:24 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy Updating MDS map to version 2 from mon.0
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:24 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev 89e3c7ad-b59b-4a3b-a29a-1a7ec870ec2b (Updating mds.cephfs deployment (+1 -> 1))
Nov 26 01:17:24 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event 89e3c7ad-b59b-4a3b-a29a-1a7ec870ec2b (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 26 01:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633780563' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 26 01:17:24 compute-0 exciting_yalow[220099]: 
Nov 26 01:17:24 compute-0 exciting_yalow[220099]: {"fsid":"36901f64-240e-5c29-a2e2-29b56f2c329c","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":210,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1764119771,"num_in_osds":3,"osd_in_since":1764119737,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84156416,"bytes_avail":64327770112,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-26T01:17:13.003777+0000","services":{}},"progress_events":{"89e3c7ad-b59b-4a3b-a29a-1a7ec870ec2b":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 26 01:17:24 compute-0 systemd[1]: libpod-97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f.scope: Deactivated successfully.
Nov 26 01:17:24 compute-0 podman[220054]: 2025-11-26 01:17:24.644348314 +0000 UTC m=+1.108354776 container died 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-61593a6b8dbb04bb9b66797b974a43738586a66b5bc281377e94c189cd063a1c-merged.mount: Deactivated successfully.
Nov 26 01:17:24 compute-0 podman[220054]: 2025-11-26 01:17:24.729210746 +0000 UTC m=+1.193217198 container remove 97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f (image=quay.io/ceph/ceph:v18, name=exciting_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:24 compute-0 systemd[1]: libpod-conmon-97c8899bed8555a57d589d97efd55bd05534ad91721764ff1f186e43baa3578f.scope: Deactivated successfully.
Nov 26 01:17:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v118: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:25 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 26 01:17:25 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 26 01:17:25 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 26 01:17:25 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 26 01:17:25 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:25 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e3 new map
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T01:17:01.314659+0000#012modified#0112025-11-26T01:17:01.314709+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.gmppdy{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] compat {c=[1],r=[1],i=[7ff]}]
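The print_map entries render as one line because journald escapes embedded newlines and tabs as #012 and #011. A small sketch that decodes such entries back into the multi-line map the mon actually printed (the file path is illustrative):

    # Sketch: undo syslog's octal escaping (#012 = newline, #011 = tab)
    # so print_map entries like the one above read as multi-line text.
    def decode(msg: str) -> str:
        return msg.replace("#012", "\n").replace("#011", "\t")

    with open("/var/log/messages") as f:   # illustrative path
        for line in f:
            if "print_map" in line:
                print(decode(line), end="")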
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy Updating MDS map to version 3 from mon.0
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy Monitors have assigned me to become a standby.
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] up:boot
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] as mds.0
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gmppdy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 01:17:25 compute-0 podman[220365]: 2025-11-26 01:17:25.53904577 +0000 UTC m=+0.147452056 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
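The health_status=healthy events come from podman's periodic healthcheck timer running the test command embedded in the container's config (here "/openstack/healthcheck ipmi"). The same check can be triggered by hand; a sketch, container name taken from the log:

    # Sketch: run the container's configured healthcheck once, as the
    # periodic event above does; exit code 0 means healthy.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ceilometer_agent_ipmi"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")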
Nov 26 01:17:25 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.gmppdy"} v 0) v1
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.gmppdy"}]: dispatch
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e3 all = 0
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e4 new map
Nov 26 01:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T01:17:01.314659+0000#012modified#0112025-11-26T01:17:25.535816+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.gmppdy{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 26 01:17:25 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gmppdy=up:creating}
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy Updating MDS map to version 4 from mon.0
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x1
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x100
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x600
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x601
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x602
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x603
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x604
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x605
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x606
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x607
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x608
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.cache creating system inode with ino:0x609
Nov 26 01:17:25 compute-0 ceph-mds[220183]: mds.0.4 creating_done
Nov 26 01:17:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gmppdy is now active in filesystem cephfs as rank 0
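Between map epochs 2 and 5 the new MDS walks up:boot -> up:standby -> up:creating -> up:active, at which point the filesystem is serving. A polling sketch that waits for rank 0 using the fsmap counters from "ceph -s -f json" (assumes a ceph CLI reachable on the host, or swap in the podman wrapper sketched earlier):

    # Sketch: wait until the fsmap reports an up MDS rank, mirroring the
    # up:creating -> up:active transition logged above.
    import json, subprocess, time

    def up_ranks() -> int:
        out = subprocess.run(["ceph", "-s", "-f", "json"],
                             check=True, capture_output=True, text=True)
        return json.loads(out.stdout)["fsmap"]["up"]   # count of up MDS ranks

    while up_ranks() < 1:
        time.sleep(2)
    print("mds rank 0 is up")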
Nov 26 01:17:25 compute-0 python3[220442]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:25 compute-0 podman[220459]: 2025-11-26 01:17:25.932132238 +0000 UTC m=+0.106060400 container create a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:25 compute-0 podman[220459]: 2025-11-26 01:17:25.896635968 +0000 UTC m=+0.070564190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:26 compute-0 systemd[1]: Started libpod-conmon-a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315.scope.
Nov 26 01:17:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0222bdbcec6242d1934c8829e054b5527f3874ed6ab503a3f73e1211749df654/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0222bdbcec6242d1934c8829e054b5527f3874ed6ab503a3f73e1211749df654/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:26 compute-0 podman[220459]: 2025-11-26 01:17:26.075306174 +0000 UTC m=+0.249234346 container init a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:26 compute-0 podman[220459]: 2025-11-26 01:17:26.087666152 +0000 UTC m=+0.261594284 container start a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:17:26 compute-0 podman[220459]: 2025-11-26 01:17:26.100000289 +0000 UTC m=+0.273928461 container attach a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:26 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 26 01:17:26 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 26 01:17:26 compute-0 podman[220502]: 2025-11-26 01:17:26.12981146 +0000 UTC m=+0.104078025 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:17:26 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 12 completed events
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:26 compute-0 podman[220502]: 2025-11-26 01:17:26.247421514 +0000 UTC m=+0.221688049 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:26 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 26 01:17:26 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 01:17:26 compute-0 podman[220540]: 2025-11-26 01:17:26.448751898 +0000 UTC m=+0.113008536 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 01:17:26 compute-0 ceph-mon[192746]: daemon mds.cephfs.compute-0.gmppdy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 26 01:17:26 compute-0 ceph-mon[192746]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 26 01:17:26 compute-0 ceph-mon[192746]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 26 01:17:26 compute-0 ceph-mon[192746]: daemon mds.cephfs.compute-0.gmppdy is now active in filesystem cephfs as rank 0
Nov 26 01:17:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:26 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e5 new map
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-26T01:17:01.314659+0000#012modified#0112025-11-26T01:17:26.596629+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.gmppdy{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 26 01:17:26 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy Updating MDS map to version 5 from mon.0
Nov 26 01:17:26 compute-0 ceph-mds[220183]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 26 01:17:26 compute-0 ceph-mds[220183]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 26 01:17:26 compute-0 ceph-mds[220183]: mds.0.4 recovery_done -- successful recovery!
Nov 26 01:17:26 compute-0 ceph-mds[220183]: mds.0.4 active_start
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3176931036,v1:192.168.122.100:6815/3176931036] up:active
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gmppdy=up:active}
Nov 26 01:17:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 26 01:17:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257997096' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 26 01:17:26 compute-0 loving_shockley[220504]: 
Nov 26 01:17:26 compute-0 loving_shockley[220504]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow
_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.klkwcz","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
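The dump above (rejoined onto one line; the source wrapped it mid-token inside "auth_allow_insecure_global_id_reclaim") is the configuration the playbook assimilated: RGW/Keystone integration under "global", cephadm bookkeeping under "mgr", and the per-daemon rgw_frontends binding. A filtering sketch over that JSON, field names as printed:

    # Sketch: pick options out of the "ceph config dump -f json" array
    # above (saved to a file here; fields match the log's JSON).
    import json

    with open("config_dump.json") as f:
        dump = json.load(f)
    for opt in dump:
        if opt["name"].startswith("rgw_keystone"):
            print(opt["section"], opt["name"], "=", opt["value"])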
Nov 26 01:17:26 compute-0 systemd[1]: libpod-a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315.scope: Deactivated successfully.
Nov 26 01:17:26 compute-0 podman[220459]: 2025-11-26 01:17:26.646648596 +0000 UTC m=+0.820576758 container died a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0222bdbcec6242d1934c8829e054b5527f3874ed6ab503a3f73e1211749df654-merged.mount: Deactivated successfully.
Nov 26 01:17:26 compute-0 podman[220459]: 2025-11-26 01:17:26.730735095 +0000 UTC m=+0.904663237 container remove a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315 (image=quay.io/ceph/ceph:v18, name=loving_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:17:26 compute-0 systemd[1]: libpod-conmon-a6b09e5964e2532680dd2309c196a319d2597dc30e2c6ecaea46c8fafb8cb315.scope: Deactivated successfully.
Nov 26 01:17:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v121: 196 pgs: 1 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 9 op/s
Nov 26 01:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 26 01:17:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 01:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 26 01:17:27 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 26 01:17:27 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:27 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/4003297178' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 26 01:17:27 compute-0 python3[220804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:27 compute-0 podman[220841]: 2025-11-26 01:17:27.938129052 +0000 UTC m=+0.062799040 container create 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:17:28 compute-0 podman[220841]: 2025-11-26 01:17:27.907746666 +0000 UTC m=+0.032416704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:28 compute-0 systemd[1]: Started libpod-conmon-45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e.scope.
Nov 26 01:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10632a1cde02700b28b784fe60525f51c7facdbdb99ba7c1bcc6ec6d2f720aa4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10632a1cde02700b28b784fe60525f51c7facdbdb99ba7c1bcc6ec6d2f720aa4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:28 compute-0 podman[220841]: 2025-11-26 01:17:28.077793379 +0000 UTC m=+0.202463357 container init 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:28 compute-0 podman[220841]: 2025-11-26 01:17:28.091793253 +0000 UTC m=+0.216463241 container start 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:17:28 compute-0 podman[220841]: 2025-11-26 01:17:28.097419032 +0000 UTC m=+0.222089090 container attach 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:28 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 26 01:17:28 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 26 01:17:28 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 26 01:17:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:17:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:17:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 2 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1007 B/s rd, 4.4 KiB/s wr, 9 op/s
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779980097' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 26 01:17:29 compute-0 compassionate_roentgen[220859]: mimic
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ae848c03-6230-46ac-8320-356eb0bab607 does not exist
Nov 26 01:17:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 56b74d6a-eb92-4c4a-8471-f8d99a753242 does not exist
Nov 26 01:17:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f375e671-480b-411e-8f9a-3e3df024e207 does not exist
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:29 compute-0 systemd[1]: libpod-45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e.scope: Deactivated successfully.
Nov 26 01:17:29 compute-0 podman[220841]: 2025-11-26 01:17:29.103686561 +0000 UTC m=+1.228356509 container died 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-10632a1cde02700b28b784fe60525f51c7facdbdb99ba7c1bcc6ec6d2f720aa4-merged.mount: Deactivated successfully.
Nov 26 01:17:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Nov 26 01:17:29 compute-0 podman[220841]: 2025-11-26 01:17:29.159632368 +0000 UTC m=+1.284302326 container remove 45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e (image=quay.io/ceph/ceph:v18, name=compassionate_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:17:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Nov 26 01:17:29 compute-0 systemd[1]: libpod-conmon-45d9c87f28b115445b328226f057508da29948547c6e1c87008fa9000da02a7e.scope: Deactivated successfully.
Nov 26 01:17:29 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 26 01:17:29 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 26 01:17:29 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 26 01:17:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 26 01:17:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 01:17:29 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:29 compute-0 podman[158021]: time="2025-11-26T01:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:17:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:17:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6782 "" "Go-http-client/1.1"
Nov 26 01:17:30 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 26 01:17:30 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.185067338 +0000 UTC m=+0.088664940 container create eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.151043089 +0000 UTC m=+0.054640781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:30 compute-0 systemd[1]: Started libpod-conmon-eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5.scope.
Nov 26 01:17:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:30 compute-0 python3[221099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.302629092 +0000 UTC m=+0.206226704 container init eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.315310049 +0000 UTC m=+0.218907681 container start eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:17:30 compute-0 quizzical_borg[221102]: 167 167
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.322327127 +0000 UTC m=+0.225924799 container attach eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:30 compute-0 systemd[1]: libpod-eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5.scope: Deactivated successfully.
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.330123126 +0000 UTC m=+0.233720738 container died eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-674dc926ad9b5821524ab3cf6a871835cb5db110b4601e0d19a35d005cf9f329-merged.mount: Deactivated successfully.
Nov 26 01:17:30 compute-0 podman[221071]: 2025-11-26 01:17:30.398052321 +0000 UTC m=+0.301649923 container remove eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:17:30 compute-0 systemd[1]: libpod-conmon-eb1233e521ec50c4a26fbfc8b88ce7e6457d0cbc2ea1a85d7c429c6977e2ade5.scope: Deactivated successfully.
Nov 26 01:17:30 compute-0 podman[221105]: 2025-11-26 01:17:30.426594315 +0000 UTC m=+0.104776864 container create d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:30 compute-0 podman[221105]: 2025-11-26 01:17:30.381959197 +0000 UTC m=+0.060141776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 26 01:17:30 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 26 01:17:30 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 26 01:17:30 compute-0 systemd[1]: Started libpod-conmon-d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1.scope.
Nov 26 01:17:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 01:17:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 26 01:17:30 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 26 01:17:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980a2a92445640b4808c413c6fd2f54994f13bdcd532c7ed739348de73e97b16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980a2a92445640b4808c413c6fd2f54994f13bdcd532c7ed739348de73e97b16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 podman[221105]: 2025-11-26 01:17:30.56835128 +0000 UTC m=+0.246533869 container init d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:30 compute-0 podman[221105]: 2025-11-26 01:17:30.584853416 +0000 UTC m=+0.263035965 container start d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:17:30 compute-0 podman[221105]: 2025-11-26 01:17:30.589348562 +0000 UTC m=+0.267531121 container attach d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:30 compute-0 podman[221144]: 2025-11-26 01:17:30.685111741 +0000 UTC m=+0.074898672 container create ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:17:30 compute-0 podman[221144]: 2025-11-26 01:17:30.651529195 +0000 UTC m=+0.041316096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:30 compute-0 systemd[1]: Started libpod-conmon-ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3.scope.
Nov 26 01:17:30 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-rgw-rgw-compute-0-klkwcz[219670]: 2025-11-26T01:17:30.800+0000 7fd52169f940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 26 01:17:30 compute-0 radosgw[219693]: LDAP not started since no server URIs were provided in the configuration.
Nov 26 01:17:30 compute-0 radosgw[219693]: framework: beast
Nov 26 01:17:30 compute-0 radosgw[219693]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 26 01:17:30 compute-0 radosgw[219693]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 26 01:17:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:30 compute-0 radosgw[219693]: starting handler: beast
Nov 26 01:17:30 compute-0 podman[221144]: 2025-11-26 01:17:30.860951177 +0000 UTC m=+0.250738108 container init ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 01:17:30 compute-0 radosgw[219693]: set uid:gid to 167:167 (ceph:ceph)
Nov 26 01:17:30 compute-0 podman[221144]: 2025-11-26 01:17:30.876953998 +0000 UTC m=+0.266740919 container start ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:30 compute-0 podman[221144]: 2025-11-26 01:17:30.888084391 +0000 UTC m=+0.277871332 container attach ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:30 compute-0 radosgw[219693]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.klkwcz,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9be083cb-5874-4aac-9f37-978a7c101882,zone_name=default,zonegroup_id=4cb7597d-bf51-432c-8990-5f8d03384091,zonegroup_name=default}
Nov 26 01:17:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 1 unknown, 196 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 251 B/s rd, 4.2 KiB/s wr, 20 op/s
Nov 26 01:17:31 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 26 01:17:31 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 26 01:17:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 26 01:17:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202178345' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 26 01:17:31 compute-0 determined_galileo[221135]: 
Nov 26 01:17:31 compute-0 determined_galileo[221135]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 26 01:17:31 compute-0 systemd[1]: libpod-d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1.scope: Deactivated successfully.
Nov 26 01:17:31 compute-0 podman[221105]: 2025-11-26 01:17:31.24680773 +0000 UTC m=+0.924990309 container died d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-980a2a92445640b4808c413c6fd2f54994f13bdcd532c7ed739348de73e97b16-merged.mount: Deactivated successfully.
Nov 26 01:17:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 26 01:17:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 26 01:17:31 compute-0 podman[221105]: 2025-11-26 01:17:31.316375261 +0000 UTC m=+0.994557830 container remove d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1 (image=quay.io/ceph/ceph:v18, name=determined_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:31 compute-0 systemd[1]: libpod-conmon-d66fb2c2b876d716ff1499d7a591658222d251e03b1e434f2be9f2a885c92ea1.scope: Deactivated successfully.
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: ERROR   01:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:17:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:17:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 01:17:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 26 01:17:31 compute-0 ceph-mon[192746]: from='client.? 192.168.122.100:0/2088957816' entity='client.rgw.rgw.compute-0.klkwcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 26 01:17:32 compute-0 suspicious_fermi[221160]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:17:32 compute-0 suspicious_fermi[221160]: --> relative data size: 1.0
Nov 26 01:17:32 compute-0 suspicious_fermi[221160]: --> All data devices are unavailable
Nov 26 01:17:32 compute-0 systemd[1]: libpod-ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3.scope: Deactivated successfully.
Nov 26 01:17:32 compute-0 systemd[1]: libpod-ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3.scope: Consumed 1.105s CPU time.
Nov 26 01:17:32 compute-0 podman[221144]: 2025-11-26 01:17:32.083296285 +0000 UTC m=+1.473083196 container died ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-abea42a398316adef9bec11a4e549fadb9ea6a932b81334a3005df0aa81c93b3-merged.mount: Deactivated successfully.
Nov 26 01:17:32 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 26 01:17:32 compute-0 podman[221144]: 2025-11-26 01:17:32.178626392 +0000 UTC m=+1.568413273 container remove ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_fermi, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:32 compute-0 systemd[1]: libpod-conmon-ee5ccef648f038992778bcbf177fff7d4e24cdc1ed64e550e501d4b4d895a1d3.scope: Deactivated successfully.
Nov 26 01:17:32 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 26 01:17:32 compute-0 ceph-mon[192746]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 26 01:17:32 compute-0 ceph-mon[192746]: Cluster is now healthy
Nov 26 01:17:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 181 B/s rd, 3.0 KiB/s wr, 14 op/s
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.191052215 +0000 UTC m=+0.079311656 container create e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.154427813 +0000 UTC m=+0.042687254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:33 compute-0 systemd[1]: Started libpod-conmon-e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e.scope.
Nov 26 01:17:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.31966391 +0000 UTC m=+0.207923381 container init e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.337014469 +0000 UTC m=+0.225273900 container start e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:33 compute-0 elated_lewin[221928]: 167 167
Nov 26 01:17:33 compute-0 systemd[1]: libpod-e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e.scope: Deactivated successfully.
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.345791756 +0000 UTC m=+0.234051187 container attach e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.350699685 +0000 UTC m=+0.238959126 container died e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5b8e8c73d3936487eb212fcae32b4c0cbe7536dfeab34c3487641b6a64020e1-merged.mount: Deactivated successfully.
Nov 26 01:17:33 compute-0 podman[221912]: 2025-11-26 01:17:33.420613975 +0000 UTC m=+0.308873406 container remove e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:17:33 compute-0 systemd[1]: libpod-conmon-e2aa56e179c13c208caa100af2c4501f97148ca7c6edbdcc0d1c9d477528282e.scope: Deactivated successfully.
Nov 26 01:17:33 compute-0 podman[221950]: 2025-11-26 01:17:33.691108218 +0000 UTC m=+0.100858353 container create 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:33 compute-0 podman[221950]: 2025-11-26 01:17:33.656787391 +0000 UTC m=+0.066537576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:33 compute-0 systemd[1]: Started libpod-conmon-352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396.scope.
Nov 26 01:17:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36144d3dad0196fbf8b7fe637889c9b6a20df01b2d08b4f07d3b52acd7452b61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36144d3dad0196fbf8b7fe637889c9b6a20df01b2d08b4f07d3b52acd7452b61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36144d3dad0196fbf8b7fe637889c9b6a20df01b2d08b4f07d3b52acd7452b61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36144d3dad0196fbf8b7fe637889c9b6a20df01b2d08b4f07d3b52acd7452b61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:33 compute-0 podman[221950]: 2025-11-26 01:17:33.898347119 +0000 UTC m=+0.308097304 container init 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:17:33 compute-0 podman[221950]: 2025-11-26 01:17:33.914294778 +0000 UTC m=+0.324044883 container start 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:17:33 compute-0 podman[221950]: 2025-11-26 01:17:33.920481283 +0000 UTC m=+0.330231458 container attach 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:34 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 26 01:17:34 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]: {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    "0": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "devices": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "/dev/loop3"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            ],
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_name": "ceph_lv0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_size": "21470642176",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "name": "ceph_lv0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "tags": {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.crush_device_class": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.encrypted": "0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_id": "0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.vdo": "0"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            },
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "vg_name": "ceph_vg0"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        }
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    ],
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    "1": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "devices": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "/dev/loop4"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            ],
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_name": "ceph_lv1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_size": "21470642176",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "name": "ceph_lv1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "tags": {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.crush_device_class": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.encrypted": "0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_id": "1",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.vdo": "0"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            },
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "vg_name": "ceph_vg1"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        }
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    ],
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    "2": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "devices": [
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "/dev/loop5"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            ],
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_name": "ceph_lv2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_size": "21470642176",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "name": "ceph_lv2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "tags": {
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.crush_device_class": "",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.encrypted": "0",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osd_id": "2",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:                "ceph.vdo": "0"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            },
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "type": "block",
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:            "vg_name": "ceph_vg2"
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:        }
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]:    ]
Nov 26 01:17:34 compute-0 interesting_roentgen[221967]: }
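The short-lived container above (interesting_roentgen: create, start and attach at 01:17:33, died and remove immediately below) is the pattern cephadm uses to gather host facts: a one-shot podman run whose stdout is captured and whose container is then discarded. The JSON it printed is consistent with "ceph-volume lvm list --format json": OSDs 0, 1 and 2 on ceph_lv0..ceph_lv2, backed by /dev/loop3..5. Assuming that is the command behind it (the exact invocation is not shown in this log), a minimal Python sketch that reduces such output to an osd_id/device table:

    import json, subprocess

    # Assumption: ceph-volume is reachable on this host, either directly or
    # via a one-shot "podman run --rm ..." like the container above.
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True)
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # Expected on this host: 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3, and so on.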
Nov 26 01:17:34 compute-0 systemd[1]: libpod-352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396.scope: Deactivated successfully.
Nov 26 01:17:34 compute-0 podman[221950]: 2025-11-26 01:17:34.792901559 +0000 UTC m=+1.202651684 container died 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-36144d3dad0196fbf8b7fe637889c9b6a20df01b2d08b4f07d3b52acd7452b61-merged.mount: Deactivated successfully.
Nov 26 01:17:34 compute-0 podman[221950]: 2025-11-26 01:17:34.906872821 +0000 UTC m=+1.316622956 container remove 352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:17:34 compute-0 systemd[1]: libpod-conmon-352038af5886488c0f2d2a44f5d58564a20bf3a509a3fad8fa56a22b7859c396.scope: Deactivated successfully.
Nov 26 01:17:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 154 B/s rd, 2.6 KiB/s wr, 11 op/s
Nov 26 01:17:35 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 26 01:17:35 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 26 01:17:35 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 26 01:17:35 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.060166215 +0000 UTC m=+0.070727545 container create d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.02736926 +0000 UTC m=+0.037930630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:36 compute-0 systemd[1]: Started libpod-conmon-d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72.scope.
Nov 26 01:17:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.200917342 +0000 UTC m=+0.211478652 container init d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.215704158 +0000 UTC m=+0.226265448 container start d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.221257235 +0000 UTC m=+0.231818565 container attach d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:36 compute-0 mystifying_agnesi[222140]: 167 167
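The single line of output from mystifying_agnesi, "167 167", matches the uid and gid of the ceph user in the official Ceph images; cephadm probes this so it knows which owner to use for the files it writes on the host. The probe command itself is not visible here; one plausible equivalent (an assumption, not taken from this log) is a stat of /var/lib/ceph inside the image:

    import subprocess

    # Hypothetical reconstruction of the uid/gid probe; prints "167 167"
    # for the quay.io/ceph/ceph image referenced throughout this log.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    print(subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"], text=True).strip())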
Nov 26 01:17:36 compute-0 systemd[1]: libpod-d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72.scope: Deactivated successfully.
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.228163709 +0000 UTC m=+0.238725039 container died d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ea344de860c2972ef398502eb08c33378034fdcf8f016b045769f43508a794-merged.mount: Deactivated successfully.
Nov 26 01:17:36 compute-0 podman[222124]: 2025-11-26 01:17:36.300327813 +0000 UTC m=+0.310889143 container remove d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:17:36 compute-0 systemd[1]: libpod-conmon-d718724db7ab4f1b630be327e23a077c86f7e5fdfd30ff07aa999b45869e7e72.scope: Deactivated successfully.
Nov 26 01:17:36 compute-0 podman[222162]: 2025-11-26 01:17:36.546559833 +0000 UTC m=+0.066988199 container create 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:17:36 compute-0 podman[222162]: 2025-11-26 01:17:36.516344761 +0000 UTC m=+0.036773177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:36 compute-0 systemd[1]: Started libpod-conmon-03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861.scope.
Nov 26 01:17:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a0ea8845481a17afdc8624c8091df0f05fcb1dc081c834ad1e9e3f0504a9fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a0ea8845481a17afdc8624c8091df0f05fcb1dc081c834ad1e9e3f0504a9fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a0ea8845481a17afdc8624c8091df0f05fcb1dc081c834ad1e9e3f0504a9fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a0ea8845481a17afdc8624c8091df0f05fcb1dc081c834ad1e9e3f0504a9fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:36 compute-0 podman[222162]: 2025-11-26 01:17:36.732635567 +0000 UTC m=+0.253063973 container init 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:36 compute-0 podman[222162]: 2025-11-26 01:17:36.751077287 +0000 UTC m=+0.271505653 container start 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:36 compute-0 podman[222162]: 2025-11-26 01:17:36.75900164 +0000 UTC m=+0.279430116 container attach 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 4.2 KiB/s wr, 199 op/s
Nov 26 01:17:37 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 26 01:17:37 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 26 01:17:37 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 26 01:17:37 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 26 01:17:37 compute-0 festive_euler[222179]: {
Nov 26 01:17:37 compute-0 festive_euler[222179]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_id": 0,
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "type": "bluestore"
Nov 26 01:17:37 compute-0 festive_euler[222179]:    },
Nov 26 01:17:37 compute-0 festive_euler[222179]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_id": 2,
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "type": "bluestore"
Nov 26 01:17:37 compute-0 festive_euler[222179]:    },
Nov 26 01:17:37 compute-0 festive_euler[222179]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_id": 1,
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:37 compute-0 festive_euler[222179]:        "type": "bluestore"
Nov 26 01:17:37 compute-0 festive_euler[222179]:    }
Nov 26 01:17:37 compute-0 festive_euler[222179]: }
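This second fact-gathering container (festive_euler) printed a map keyed by OSD UUID with ceph_fsid, device, osd_id and type fields, which matches the shape of "ceph-volume raw list" output for BlueStore OSDs; note it reports the device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN) for the same three OSDs enumerated above. Assuming that is the underlying command, a short sketch that inverts the map into an osd_id-to-device table:

    import json, subprocess

    # Assumption: "ceph-volume raw list" (JSON output) produced the map
    # above; run with no device argument it scans all devices.
    out = subprocess.check_output(["ceph-volume", "raw", "list"], text=True)
    for osd_uuid, osd in sorted(json.loads(out).items()):
        print(osd["osd_id"], osd["device"], osd["type"])
    # Expected here: 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore, etc.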
Nov 26 01:17:37 compute-0 systemd[1]: libpod-03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861.scope: Deactivated successfully.
Nov 26 01:17:37 compute-0 podman[222162]: 2025-11-26 01:17:37.966408558 +0000 UTC m=+1.486836914 container died 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:17:37 compute-0 systemd[1]: libpod-03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861.scope: Consumed 1.215s CPU time.
Nov 26 01:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-45a0ea8845481a17afdc8624c8091df0f05fcb1dc081c834ad1e9e3f0504a9fd-merged.mount: Deactivated successfully.
Nov 26 01:17:38 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 26 01:17:38 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 26 01:17:38 compute-0 podman[222162]: 2025-11-26 01:17:38.09030476 +0000 UTC m=+1.610733126 container remove 03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:17:38 compute-0 systemd[1]: libpod-conmon-03e05fedd42d8cafdc5c7885eb24e2767f2f92bd02cf70d1c83f452c0ffe7861.scope: Deactivated successfully.
Nov 26 01:17:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev beb86f90-baa0-4ba9-b53f-5fc70d903011 does not exist
Nov 26 01:17:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bf2c981d-d3e1-410a-88ef-62541b318687 does not exist
Nov 26 01:17:38 compute-0 python3[222324]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:38 compute-0 podman[222358]: 2025-11-26 01:17:38.905939306 +0000 UTC m=+0.087572589 container create 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:17:38 compute-0 systemd[1]: Started libpod-conmon-9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d.scope.
Nov 26 01:17:38 compute-0 podman[222358]: 2025-11-26 01:17:38.879899682 +0000 UTC m=+0.061532985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4fbd2ea0e984a202564dd4f636f6dc55dc99430fd3598c2bff6cb030f4488/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d4fbd2ea0e984a202564dd4f636f6dc55dc99430fd3598c2bff6cb030f4488/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:39 compute-0 podman[222358]: 2025-11-26 01:17:39.042531666 +0000 UTC m=+0.224164949 container init 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:39 compute-0 podman[222358]: 2025-11-26 01:17:39.071082841 +0000 UTC m=+0.252716134 container start 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:17:39 compute-0 podman[222358]: 2025-11-26 01:17:39.079975851 +0000 UTC m=+0.261609144 container attach 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.5 KiB/s wr, 166 op/s
Nov 26 01:17:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:39 compute-0 vigilant_thompson[222408]: could not fetch user info: no user info saved
Nov 26 01:17:39 compute-0 systemd[1]: libpod-9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d.scope: Deactivated successfully.
Nov 26 01:17:39 compute-0 podman[222541]: 2025-11-26 01:17:39.560156214 +0000 UTC m=+0.045792881 container died 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0d4fbd2ea0e984a202564dd4f636f6dc55dc99430fd3598c2bff6cb030f4488-merged.mount: Deactivated successfully.
Nov 26 01:17:39 compute-0 podman[222541]: 2025-11-26 01:17:39.620887256 +0000 UTC m=+0.106523923 container remove 9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d (image=quay.io/ceph/ceph:v18, name=vigilant_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:39 compute-0 systemd[1]: libpod-conmon-9c575ba04eb3a2350fc8c5384dd24cdce683d65662765555c0297ed5d795704d.scope: Deactivated successfully.
Nov 26 01:17:39 compute-0 podman[222581]: 2025-11-26 01:17:39.889946779 +0000 UTC m=+0.115414314 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:17:40 compute-0 podman[222581]: 2025-11-26 01:17:40.005161776 +0000 UTC m=+0.230629261 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:17:40 compute-0 python3[222625]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 36901f64-240e-5c29-a2e2-29b56f2c329c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
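The two Ansible tasks above form a check-then-create pair: "user info --uid openstack" failed at 01:17:39 with "could not fetch user info: no user info saved", which is how radosgw-admin reports a user that does not exist, so the playbook follows up with "user create". A minimal Python sketch of the same idempotent pattern, with the image and mount paths taken from the log (the --fsid flag and the assimilate_ceph.conf mount are dropped for brevity; this is not the playbook's actual code):

    import json, subprocess

    # One-shot radosgw-admin via podman, as in the log's Ansible tasks.
    BASE = ["podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v18",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring"]

    def ensure_rgw_user(uid):
        # "user info" exits non-zero ("no user info saved") when the
        # user is absent; only then do we create it.
        probe = subprocess.run(BASE + ["user", "info", "--uid", uid],
                               capture_output=True, text=True)
        if probe.returncode == 0:
            return json.loads(probe.stdout)
        return json.loads(subprocess.check_output(
            BASE + ["user", "create", "--uid", uid, "--display-name", uid],
            text=True))

    user = ensure_rgw_user("openstack")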
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.242956868 +0000 UTC m=+0.106575685 container create b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.205353158 +0000 UTC m=+0.068971985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 26 01:17:40 compute-0 systemd[1]: Started libpod-conmon-b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2.scope.
Nov 26 01:17:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302a7448524bfed882b394c9432fa5e0e5325e54b449473844649b4d8aee5408/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302a7448524bfed882b394c9432fa5e0e5325e54b449473844649b4d8aee5408/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.381383969 +0000 UTC m=+0.245002816 container init b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.397490443 +0000 UTC m=+0.261109230 container start b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.403328817 +0000 UTC m=+0.266947704 container attach b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:40 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 26 01:17:40 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 26 01:17:40 compute-0 vigilant_jang[222680]: {
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "user_id": "openstack",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "display_name": "openstack",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "email": "",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "suspended": 0,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "max_buckets": 1000,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "subusers": [],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "keys": [
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        {
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:            "user": "openstack",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:            "access_key": "57RB3A1XM8H7360KSH09",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:            "secret_key": "9Gs7mJlsQlFtnXUxjwMCj0e5l1ChuuPmb6mQPUB3"
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        }
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    ],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "swift_keys": [],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "caps": [],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "op_mask": "read, write, delete",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "default_placement": "",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "default_storage_class": "",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "placement_tags": [],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "bucket_quota": {
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "enabled": false,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "check_on_raw": false,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_size": -1,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_size_kb": 0,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_objects": -1
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    },
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "user_quota": {
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "enabled": false,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "check_on_raw": false,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_size": -1,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_size_kb": 0,
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:        "max_objects": -1
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    },
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "temp_url_keys": [],
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "type": "rgw",
Nov 26 01:17:40 compute-0 vigilant_jang[222680]:    "mfa_ids": []
Nov 26 01:17:40 compute-0 vigilant_jang[222680]: }
Nov 26 01:17:40 compute-0 vigilant_jang[222680]: 
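The "user create" JSON above carries the freshly generated S3 credential pair for the openstack user (keys[0].access_key and secret_key), presumably captured from stdout by the playbook. Extracting it from a saved copy of that JSON is a one-liner; the filename below is hypothetical:

    import json

    # Hypothetical capture of the radosgw-admin JSON printed above.
    with open("openstack_user.json") as f:
        user = json.load(f)
    key = user["keys"][0]
    print(key["access_key"], key["secret_key"])

Since this key pair is now exposed in plain text in the journal, it can be replaced afterwards with "radosgw-admin key create --uid openstack --gen-access-key --gen-secret" if that matters in this environment.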
Nov 26 01:17:40 compute-0 systemd[1]: libpod-b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2.scope: Deactivated successfully.
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.874244689 +0000 UTC m=+0.737863506 container died b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-302a7448524bfed882b394c9432fa5e0e5325e54b449473844649b4d8aee5408-merged.mount: Deactivated successfully.
Nov 26 01:17:40 compute-0 podman[222645]: 2025-11-26 01:17:40.946210137 +0000 UTC m=+0.809828924 container remove b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2 (image=quay.io/ceph/ceph:v18, name=vigilant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:17:40 compute-0 systemd[1]: libpod-conmon-b3cd6a587ad75c36535f0992ee6ac0a14d65dc319e030893033f0cfcf8ad81e2.scope: Deactivated successfully.
Nov 26 01:17:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:17:40
Nov 26 01:17:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:17:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:17:40 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'backups', 'images', '.mgr', 'cephfs.cephfs.data']
Nov 26 01:17:40 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.2 KiB/s wr, 150 op/s
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fe2926a2-aa12-4e5a-a7da-dcd2d45919c6 does not exist
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b85aaa26-f367-48af-b793-c08193b2479c does not exist
Nov 26 01:17:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f4ad5e37-b6a0-409c-a23a-b919fd9b8c80 does not exist
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:17:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
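The mon_command traffic above is the cephadm mgr module at work: it persists per-host device state under config-key (mgr/cephadm/host.compute-0*, osd_remove_queue) and fetches the client.admin and client.bootstrap-osd keyrings plus a minimal ceph.conf for the ceph-volume containers it is about to launch. The same interface is available to any client through librados; a minimal sketch with the rados Python binding (conffile path assumed):

    import json
    import rados

    # Issue the same "config generate-minimal-conf" command the mgr
    # dispatched above; mon_command returns (retcode, outbuf, status).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()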
Nov 26 01:17:41 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 26 01:17:41 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 26 01:17:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:17:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.357778349 +0000 UTC m=+0.102749527 container create 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.31348275 +0000 UTC m=+0.058453978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:42 compute-0 systemd[1]: Started libpod-conmon-11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6.scope.
Nov 26 01:17:42 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 26 01:17:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:42 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.495255003 +0000 UTC m=+0.240226201 container init 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.513412175 +0000 UTC m=+0.258383353 container start 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:42 compute-0 great_knuth[223025]: 167 167
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.519645651 +0000 UTC m=+0.264616819 container attach 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:17:42 compute-0 systemd[1]: libpod-11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6.scope: Deactivated successfully.
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.522202323 +0000 UTC m=+0.267173501 container died 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6df8f4bddeec624579ab7a2b9d842ec6a957d1e0d303171a709bb3446c17b6bb-merged.mount: Deactivated successfully.
Nov 26 01:17:42 compute-0 podman[223009]: 2025-11-26 01:17:42.588334946 +0000 UTC m=+0.333306124 container remove 11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:42 compute-0 systemd[1]: libpod-conmon-11e020282c6cf9c996488ca406ce82241e6a878922b0c2c499aa61ea80d500c6.scope: Deactivated successfully.
Nov 26 01:17:42 compute-0 podman[223049]: 2025-11-26 01:17:42.838733614 +0000 UTC m=+0.080928402 container create a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:42 compute-0 podman[223049]: 2025-11-26 01:17:42.796943706 +0000 UTC m=+0.039138544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:42 compute-0 systemd[1]: Started libpod-conmon-a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b.scope.
Nov 26 01:17:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:43 compute-0 podman[223049]: 2025-11-26 01:17:43.002722685 +0000 UTC m=+0.244917513 container init a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:17:43 compute-0 podman[223049]: 2025-11-26 01:17:43.022788981 +0000 UTC m=+0.264983779 container start a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:17:43 compute-0 podman[223049]: 2025-11-26 01:17:43.030003844 +0000 UTC m=+0.272198642 container attach a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 1.4 KiB/s wr, 127 op/s
Nov 26 01:17:43 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 26 01:17:43 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 26 01:17:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:44 compute-0 nifty_zhukovsky[223066]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:17:44 compute-0 nifty_zhukovsky[223066]: --> relative data size: 1.0
Nov 26 01:17:44 compute-0 nifty_zhukovsky[223066]: --> All data devices are unavailable
Nov 26 01:17:44 compute-0 systemd[1]: libpod-a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b.scope: Deactivated successfully.
Nov 26 01:17:44 compute-0 systemd[1]: libpod-a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b.scope: Consumed 1.241s CPU time.
Nov 26 01:17:44 compute-0 podman[223049]: 2025-11-26 01:17:44.31324729 +0000 UTC m=+1.555442078 container died a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a34946ead534fa762c004068603ef91211483a10f80fa23bdc792228f94de45-merged.mount: Deactivated successfully.
Nov 26 01:17:44 compute-0 podman[223049]: 2025-11-26 01:17:44.413474954 +0000 UTC m=+1.655669752 container remove a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:44 compute-0 systemd[1]: libpod-conmon-a7e9893d577e4647846c6a0911380b24c4c3ec598c11297aec1f861e74c8929b.scope: Deactivated successfully.
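The short-lived podman containers here (vigilant_jang, great_knuth, nifty_zhukovsky, ...) are cephadm invoking tools inside the ceph image: the bare "167 167" output is evidently a uid/gid probe (ceph runs as uid/gid 167 in these images), and nifty_zhukovsky's "--> passed data devices: 0 physical, 3 LVM ... All data devices are unavailable" is ceph-volume batch-report output concluding that all three LVM data devices are already consumed, so no new OSDs get created. A hedged reproduction of such a probe (same image digest as in the log; cephadm's real invocation passes more mounts and arguments):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Throwaway container in the style of cephadm: --rm cleans it up on
    # exit, /dev is mapped in so ceph-volume can inspect block devices.
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         "--entrypoint", "/usr/sbin/ceph-volume",  # path assumed
         IMAGE, "inventory", "--format", "json"],
        check=True,
    )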
Nov 26 01:17:44 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 01:17:44 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 01:17:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 1.4 KiB/s wr, 126 op/s
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.607294499 +0000 UTC m=+0.084527803 container create 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.575927525 +0000 UTC m=+0.053160879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:45 compute-0 systemd[1]: Started libpod-conmon-8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c.scope.
Nov 26 01:17:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.752761529 +0000 UTC m=+0.229994833 container init 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.768388929 +0000 UTC m=+0.245622243 container start 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.776318102 +0000 UTC m=+0.253551466 container attach 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:17:45 compute-0 thirsty_jemison[223260]: 167 167
Nov 26 01:17:45 compute-0 systemd[1]: libpod-8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c.scope: Deactivated successfully.
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.779569194 +0000 UTC m=+0.256802498 container died 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc229c468af66b7b90f18167e1fd645aa954fe735ff079c6b32c531c327e3584-merged.mount: Deactivated successfully.
Nov 26 01:17:45 compute-0 podman[223244]: 2025-11-26 01:17:45.852800758 +0000 UTC m=+0.330034072 container remove 8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:17:45 compute-0 systemd[1]: libpod-conmon-8c38dea2338b9776cd8d32d6582054f43d20776e2e037babf6ca93ce7bbf574c.scope: Deactivated successfully.
Nov 26 01:17:45 compute-0 podman[223276]: 2025-11-26 01:17:45.955165253 +0000 UTC m=+0.091927122 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:17:45 compute-0 podman[223272]: 2025-11-26 01:17:45.973705845 +0000 UTC m=+0.108784426 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true)
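Interleaved with the cephadm activity, systemd-timed podman healthchecks report healthy for the edpm-managed containers (podman_exporter and ceilometer_agent_compute here, ovn_controller just below). Each health_status journal line corresponds to one execution of the container's configured test command; a minimal equivalent from Python:

    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute"):
        # "podman healthcheck run" executes the container's test command;
        # exit status 0 means healthy, matching these journal entries.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")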
Nov 26 01:17:46 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 26 01:17:46 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 26 01:17:46 compute-0 podman[223330]: 2025-11-26 01:17:46.06859641 +0000 UTC m=+0.048927680 container create 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:17:46 compute-0 podman[223317]: 2025-11-26 01:17:46.109001779 +0000 UTC m=+0.121736012 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:17:46 compute-0 systemd[1]: Started libpod-conmon-01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093.scope.
Nov 26 01:17:46 compute-0 podman[223330]: 2025-11-26 01:17:46.051990642 +0000 UTC m=+0.032321932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1554c54adeff758670051ef347c69ed681732623ceaddca9a67b7b582b2c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1554c54adeff758670051ef347c69ed681732623ceaddca9a67b7b582b2c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1554c54adeff758670051ef347c69ed681732623ceaddca9a67b7b582b2c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a1554c54adeff758670051ef347c69ed681732623ceaddca9a67b7b582b2c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:46 compute-0 podman[223330]: 2025-11-26 01:17:46.200302782 +0000 UTC m=+0.180634072 container init 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:46 compute-0 podman[223330]: 2025-11-26 01:17:46.220182462 +0000 UTC m=+0.200513772 container start 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:17:46 compute-0 podman[223330]: 2025-11-26 01:17:46.226277284 +0000 UTC m=+0.206608574 container attach 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:17:46 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 26 01:17:46 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 26 01:17:46 compute-0 elated_jepsen[223365]: {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    "0": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "devices": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "/dev/loop3"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            ],
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_name": "ceph_lv0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_size": "21470642176",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "name": "ceph_lv0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "tags": {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.crush_device_class": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.encrypted": "0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_id": "0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.vdo": "0"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            },
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "vg_name": "ceph_vg0"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        }
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    ],
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    "1": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "devices": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "/dev/loop4"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            ],
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_name": "ceph_lv1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_size": "21470642176",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "name": "ceph_lv1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "tags": {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.crush_device_class": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.encrypted": "0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_id": "1",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.vdo": "0"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            },
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "vg_name": "ceph_vg1"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        }
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    ],
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    "2": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "devices": [
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "/dev/loop5"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            ],
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_name": "ceph_lv2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_size": "21470642176",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "name": "ceph_lv2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "tags": {
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.cluster_name": "ceph",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.crush_device_class": "",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.encrypted": "0",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osd_id": "2",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:                "ceph.vdo": "0"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            },
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "type": "block",
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:            "vg_name": "ceph_vg2"
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:        }
Nov 26 01:17:46 compute-0 elated_jepsen[223365]:    ]
Nov 26 01:17:46 compute-0 elated_jepsen[223365]: }
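elated_jepsen's JSON is a ceph-volume lvm list style report: OSDs 0-2 map to ceph_vg{0,1,2}/ceph_lv{0,1,2}, each 21470642176 bytes (~20 GiB) on a loop device, and 3 x 20 GiB agrees with the "60 GiB / 60 GiB avail" raw capacity in the pgmap lines. A small sketch for flattening such a report, assuming it has been captured to a file:

    import json

    # ceph-volume keys the report by OSD id; each value is a list of LVs.
    with open("lvm_list.json") as f:   # hypothetical capture of the output
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 1024**3
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({size_gib:.1f} GiB on {','.join(lv['devices'])})")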
Nov 26 01:17:46 compute-0 systemd[1]: libpod-01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093.scope: Deactivated successfully.
Nov 26 01:17:47 compute-0 podman[223374]: 2025-11-26 01:17:47.061143023 +0000 UTC m=+0.045935956 container died 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.6 KiB/s wr, 126 op/s
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
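Each pg_autoscaler pass computes, per pool, its share of the 64411926528-byte (~60 GiB) raw capacity, multiplies by the pool bias (4.0 for the metadata pools), and scales by a total PG budget; the numbers above are consistent with ratio x bias x 300, i.e. mon_target_pg_per_osd=100 across 3 OSDs (7.185749983720779e-06 x 300 = 0.0021557... for .mgr). The result is snapped to a power of two and floored at the pool's pg_num_min, which is why near-zero targets still quantize to 32 (default), 16 (cephfs.cephfs.meta) or 1 (.mgr). A toy version of that rounding step, assuming a simple nearest-power-of-two rule (the real module also applies a 3x threshold before acting on a pool):

    import math

    def quantize_pg_target(ratio: float, bias: float,
                           total_pgs: int, pg_num_min: int) -> int:
        # Scale the pool's capacity share to a PG count, then snap to a
        # power of two, never dropping below the pool's pg_num_min.
        target = max(ratio * bias * total_pgs, pg_num_min)
        exp = max(0, round(math.log2(target)))
        return max(pg_num_min, 2 ** exp)

    # Reproduces the quantized values logged above (total budget 300 PGs):
    print(quantize_pg_target(7.185749983720779e-06, 1.0, 300, 1))   # .mgr -> 1
    print(quantize_pg_target(5.087256625643029e-07, 4.0, 300, 16))  # cephfs meta -> 16
    print(quantize_pg_target(2.5436283128215145e-07, 1.0, 300, 32)) # .rgw.root -> 32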
Nov 26 01:17:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:17:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-53a1554c54adeff758670051ef347c69ed681732623ceaddca9a67b7b582b2c3-merged.mount: Deactivated successfully.
Nov 26 01:17:47 compute-0 podman[223374]: 2025-11-26 01:17:47.157702764 +0000 UTC m=+0.142495677 container remove 01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:17:47 compute-0 systemd[1]: libpod-conmon-01cd71060342f6cbd8caed7558b7df98a3b10053d19de8e0447d98c7d3474093.scope: Deactivated successfully.
Nov 26 01:17:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 26 01:17:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 26 01:17:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:47 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 26 01:17:47 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev e26ee2f3-d9f6-4746-9ab1-19cc8377aab7 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 01:17:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:17:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:47 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 26 01:17:47 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 26 01:17:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 26 01:17:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 26 01:17:48 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 26 01:17:48 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev d7be03f6-689a-4511-ae21-7ffce754cc76 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 01:17:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:17:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
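Acting on those targets, the mgr issues "osd pool set <pool> pg_num 32" for .rgw.root, default.rgw.log and default.rgw.control; each finished command bumps the osdmap epoch (e52, e53) and starts a progress event ("PG autoscaler increasing pool 8 PGs from 1 to 32"). Those events are what the progress module reports; the earlier "complete: ev ... does not exist" warnings are the same module being asked to complete events it no longer tracks, which is typically harmless after a mgr restart. A hedged way to watch the events from Python (field names assumed from recent releases):

    import json
    import subprocess

    # "ceph progress json" dumps the progress module's event list; the
    # exact schema is an assumption ("events" with id/message/progress).
    out = subprocess.run(
        ["ceph", "progress", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for ev in json.loads(out).get("events", []):
        print(ev.get("id"), ev.get("message"), ev.get("progress"))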
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.368048085 +0000 UTC m=+0.095058370 container create 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:17:48 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Nov 26 01:17:48 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.338027729 +0000 UTC m=+0.065038054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:48 compute-0 systemd[1]: Started libpod-conmon-6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62.scope.
Nov 26 01:17:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.521757527 +0000 UTC m=+0.248767862 container init 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.5339123 +0000 UTC m=+0.260922575 container start 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.540154806 +0000 UTC m=+0.267165121 container attach 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:48 compute-0 exciting_maxwell[223540]: 167 167
Nov 26 01:17:48 compute-0 systemd[1]: libpod-6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62.scope: Deactivated successfully.
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.547992767 +0000 UTC m=+0.275003042 container died 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b5330bc03525e788ce30672b9b644bced64e415026d4a7ab395e0653ca7a5bd-merged.mount: Deactivated successfully.
Nov 26 01:17:48 compute-0 podman[223525]: 2025-11-26 01:17:48.639081804 +0000 UTC m=+0.366092059 container remove 6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_maxwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:17:48 compute-0 systemd[1]: libpod-conmon-6c8bf21fb1cc78a956286d1f40512299d1f05da08b24e7b11dd5134fcf387f62.scope: Deactivated successfully.
Nov 26 01:17:48 compute-0 podman[223565]: 2025-11-26 01:17:48.94480686 +0000 UTC m=+0.097757176 container create 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:17:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:48 compute-0 podman[223565]: 2025-11-26 01:17:48.908784735 +0000 UTC m=+0.061735101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:17:49 compute-0 systemd[1]: Started libpod-conmon-1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4.scope.
Nov 26 01:17:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743df78d0d02b335dae5d11271a061ad015444224f75adea1776493688b8cb97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743df78d0d02b335dae5d11271a061ad015444224f75adea1776493688b8cb97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743df78d0d02b335dae5d11271a061ad015444224f75adea1776493688b8cb97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743df78d0d02b335dae5d11271a061ad015444224f75adea1776493688b8cb97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:17:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 26 01:17:49 compute-0 podman[223565]: 2025-11-26 01:17:49.102903585 +0000 UTC m=+0.255853961 container init 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:17:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 podman[223565]: 2025-11-26 01:17:49.119570584 +0000 UTC m=+0.272520910 container start 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:49 compute-0 podman[223565]: 2025-11-26 01:17:49.126100369 +0000 UTC m=+0.279050745 container attach 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:17:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 26 01:17:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.078663826s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 124.375396729s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 54 pg[9.0( v 51'590 (0'0,51'590] local-lis/les=45/46 n=209 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.114114761s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 51'589 mlcod 51'589 active pruub 118.412193298s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:49 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev b19a48ae-3282-4293-9586-5a49a555c6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 01:17:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=14.078663826s) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 124.375396729s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 26 01:17:49 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:49 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 54 pg[9.0( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.114114761s) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 51'589 mlcod 0'0 unknown pruub 118.412193298s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 26 01:17:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 26 01:17:50 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 26 01:17:50 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 26 01:17:50 compute-0 reverent_banzai[223582]: {
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_id": 0,
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "type": "bluestore"
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    },
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_id": 2,
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "type": "bluestore"
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    },
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_id": 1,
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:        "type": "bluestore"
Nov 26 01:17:50 compute-0 reverent_banzai[223582]:    }
Nov 26 01:17:50 compute-0 reverent_banzai[223582]: }
Nov 26 01:17:50 compute-0 systemd[1]: libpod-1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4.scope: Deactivated successfully.
Nov 26 01:17:50 compute-0 systemd[1]: libpod-1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4.scope: Consumed 1.233s CPU time.
Nov 26 01:17:50 compute-0 podman[223565]: 2025-11-26 01:17:50.351393261 +0000 UTC m=+1.504343577 container died 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:17:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 26 01:17:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 26 01:17:50 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] update: starting ev c9af4f46-024c-45c8-b2cf-9fa13493288b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 01:17:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev e26ee2f3-d9f6-4746-9ab1-19cc8377aab7 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event e26ee2f3-d9f6-4746-9ab1-19cc8377aab7 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev d7be03f6-689a-4511-ae21-7ffce754cc76 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event d7be03f6-689a-4511-ae21-7ffce754cc76 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev b19a48ae-3282-4293-9586-5a49a555c6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event b19a48ae-3282-4293-9586-5a49a555c6d1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] complete: finished ev c9af4f46-024c-45c8-b2cf-9fa13493288b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress INFO root] Completed event c9af4f46-024c-45c8-b2cf-9fa13493288b (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.15( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.17( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.11( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.14( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.3( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.2( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.d( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.c( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.9( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.b( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.16( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.a( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.e( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.6( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.8( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-743df78d0d02b335dae5d11271a061ad015444224f75adea1776493688b8cb97-merged.mount: Deactivated successfully.
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.7( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.4( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.5( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1a( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.18( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.19( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1e( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1f( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1c( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1d( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.12( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.10( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.13( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1b( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.f( v 51'590 lc 0'0 (0'0,51'590] local-lis/les=45/46 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.14( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.a( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.4( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1a( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.0( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 51'589 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.12( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.10( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.2( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:50 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 55 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=51'590 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
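The burst above records osd.1 finishing activation ("react AllReplicasActivated Activating complete") for the pool-8 and pool-9 placement groups at pg_epoch 55. A minimal sketch for tallying PG states cluster-wide at a moment like this, using the real "ceph pg dump" command; the helper name is hypothetical and the JSON nesting (pg_map -> pg_stats on recent releases, flat on older ones) is an assumption handled defensively:

    #!/usr/bin/env python3
    # Count PGs per state from "ceph pg dump --format json".
    import json
    import subprocess
    from collections import Counter

    def pg_state_counts() -> Counter:
        out = subprocess.run(
            ["ceph", "pg", "dump", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        dump = json.loads(out)
        # Recent releases nest the stats under "pg_map"; fall back to the
        # top level for older layouts.
        stats = dump.get("pg_map", dump).get("pg_stats", [])
        return Counter(pg["state"] for pg in stats)

    if __name__ == "__main__":
        for state, count in pg_state_counts().most_common():
            print(f"{count:4d}  {state}")

Immediately after an activation burst like this one, the dominant bucket should be active (or active+clean once scrub/recovery settles).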
Nov 26 01:17:50 compute-0 podman[223565]: 2025-11-26 01:17:50.440376279 +0000 UTC m=+1.593326575 container remove 1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_banzai, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:17:50 compute-0 systemd[1]: libpod-conmon-1dcaeca540fcf7e22f93cfe0ee79aab7f154a99402dbcff834af997a2fbc1ee4.scope: Deactivated successfully.
Nov 26 01:17:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:17:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:17:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5d73f778-de78-46b0-bd45-6c52732ae9c1 does not exist
Nov 26 01:17:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 37ffd168-58eb-4530-8a5d-30601aefa3fc does not exist
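The two mgr warnings above mean the progress module was asked to complete events (by UUID) that it no longer tracks; they are benign when the events already expired. A sketch for listing what the module currently tracks, assuming the "ceph progress json" mgr command available on recent releases (its exact output fields are not guaranteed here):

    #!/usr/bin/env python3
    # Dump the mgr progress module's current event list.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "progress", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))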
Nov 26 01:17:51 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 01:17:51 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 01:17:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v141: 259 pgs: 259 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mgr[193049]: [progress INFO root] Writing back 16 completed events
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 26 01:17:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:17:51 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 26 01:17:51 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
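The dispatch/finished audit pairs above are the mgr stepping pg_num_actual and pgp_num_actual toward a new target; operators do not set the *_actual variables directly. A hedged sketch of the operator-facing side that triggers this sequence, using the standard "ceph osd pool set <pool> pg_num <n>" CLI (the pool name and target value are taken from the log as examples; the helper name is hypothetical):

    #!/usr/bin/env python3
    # Request a new PG count for a pool; the mgr then walks
    # pg_num_actual/pgp_num_actual itself, producing audit entries
    # like the ones logged above.
    import subprocess

    def set_pool_pg_num(pool: str, pg_num: int) -> None:
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pg_num", str(pg_num)],
            check=True,
        )

    set_pool_pg_num("default.rgw.control", 32)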
Nov 26 01:17:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 26 01:17:51 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.931165695s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 51'63 active pruub 121.647651672s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=15.931165695s) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 unknown pruub 121.647651672s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 podman[223678]: 2025-11-26 01:17:51.582971931 +0000 UTC m=+0.119597622 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:17:51 compute-0 podman[223677]: 2025-11-26 01:17:51.598608871 +0000 UTC m=+0.136853278 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
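The two podman events above are periodic healthcheck results (health_status=healthy) for the node_exporter and openstack_network_exporter containers. A small sketch for reading the same status on demand via "podman inspect"; the Go-template field is .State.Health.Status on recent podman (older releases exposed it as .State.Healthcheck), which is an assumption to verify on the target host:

    #!/usr/bin/env python3
    # Query container health the same way the health_status events report it.
    import subprocess

    def health(container: str) -> str:
        return subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    for name in ("node_exporter", "openstack_network_exporter"):
        print(name, health(name))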
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.728066444s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.367851257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727966309s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.367851257s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727993011s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.367912292s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727947235s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.367912292s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.728569984s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.368637085s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.728473663s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368637085s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727767944s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368103027s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727737427s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368103027s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727486610s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.368209839s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727443695s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368209839s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727088928s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368164062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726981163s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368164062s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726756096s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368270874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726811409s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.368339539s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=49/50 n=2 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.826102257s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 51'1 active pruub 122.467720032s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726709366s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368339539s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727749825s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369148254s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.727385521s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369148254s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726613998s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368415833s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726581573s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368415833s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726629257s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368270874s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726530075s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.368499756s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726501465s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368499756s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726849556s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.368881226s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.735284805s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.377357483s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726820946s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368881226s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.735249519s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.377357483s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726660728s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368980408s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726628304s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368980408s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726543427s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369003296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726364136s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369049072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726330757s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369049072s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726104736s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.368927002s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726068497s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.368927002s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726205826s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369178772s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726175308s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369178772s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726154327s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.369407654s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726334572s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369003296s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.726115227s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369407654s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724986076s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369773865s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724946022s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369773865s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724431992s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.369438171s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724402428s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369438171s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724377632s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369491577s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724336624s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369491577s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724413872s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.369606018s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.e( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724384308s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369606018s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.9( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.15( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724314690s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369697571s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724019051s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.369812012s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723985672s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369812012s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723787308s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369796753s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723761559s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369796753s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723596573s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.369903564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723564148s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369903564s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723686218s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.370178223s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723540306s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370178223s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723321915s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.370254517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723280907s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370254517s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.2( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723055840s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.370231628s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.723018646s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370231628s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.724287033s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.369697571s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722833633s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.370292664s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722793579s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370292664s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722604752s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.370346069s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722606659s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.370407104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722567558s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370346069s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722567558s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370407104s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.729363441s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 127.377479553s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.729329109s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.377479553s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.722017288s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.370414734s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.721984863s) [0] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370414734s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.6( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.1c( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 56 pg[8.11( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.719537735s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 127.370437622s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=14.719488144s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.370437622s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:51 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 56 pg[11.0( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=9.826102257s) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 unknown pruub 122.467720032s@ mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.1f( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.1d( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:51 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 56 pg[8.1a( empty local-lis/les=0/0 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
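The long run above is the osdmap e56 rebalance: each start_peering_interval line shows a PG's up/acting set moving off osd.1 (role 0 -> -1, "transitioning to Stray") while osd.0 or osd.2 takes over as Primary. To inspect one PG's peering outcome directly, a minimal sketch around the real "ceph pg <pgid> query" command; the pgid is taken from the log, and the top-level fields printed (state/up/acting) match the query output on recent releases:

    #!/usr/bin/env python3
    # Query a single PG's peering state, up set, and acting set.
    import json
    import subprocess

    pgid = "8.1d"  # example PG from the log above
    out = subprocess.run(
        ["ceph", "pg", pgid, "query"],
        check=True, capture_output=True, text=True,
    ).stdout
    q = json.loads(out)
    print(pgid, q.get("state"), "up:", q.get("up"), "acting:", q.get("acting"))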
Nov 26 01:17:52 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 26 01:17:52 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 26 01:17:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 26 01:17:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 26 01:17:52 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 26 01:17:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:17:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=-1 lpr=57 pi=[54,57)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=49/50 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.16( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.13( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 51'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.5( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.7( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1d( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=56/57 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.18( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.5( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.c( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=56/57 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.9( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 51'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.15( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [2] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.14( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 57 pg[10.3( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [2] r=0 lpr=56 pi=[47,56)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 57 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=56/57 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:52 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 57 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [1] r=0 lpr=56 pi=[49,56)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 9 peering, 62 unknown, 250 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:17:53 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 26 01:17:53 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 26 01:17:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 26 01:17:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 26 01:17:53 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 58 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=57) [0]/[1] async=[0] r=0 lpr=57 pi=[54,57)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 26 01:17:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 26 01:17:54 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.583885193s) [0] async=[0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558898926s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.583783150s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558898926s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.581692696s) [0] async=[0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558166504s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.581494331s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558166504s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.568811417s) [0] async=[0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.547714233s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.568680763s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.547714233s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.577363014s) [0] async=[0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558807373s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 59 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59 pruub=15.575766563s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558807373s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:54 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 59 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:54 compute-0 systemd-logind[800]: New session 40 of user zuul.
Nov 26 01:17:54 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 26 01:17:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 26 01:17:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 26 01:17:55 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.575355530s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559112549s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.575244904s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559112549s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.575286865s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559524536s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.575234413s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559524536s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.574582100s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559417725s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.574531555s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559417725s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.573797226s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559036255s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.573711395s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559036255s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.572688103s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558242798s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.572613716s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558242798s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.574062347s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559768677s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.574000359s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559768677s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.573070526s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559204102s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.573003769s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559204102s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571186066s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558090210s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571323395s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558349609s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571064949s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558090210s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571272850s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558349609s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571742058s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558975220s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.571681976s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558975220s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.572193146s) [0] async=[0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.559677124s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 60 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=57/58 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60 pruub=14.572076797s) [0] r=-1 lpr=60 pi=[54,60)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.559677124s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.d( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1b( v 51'590 (0'0,51'590] local-lis/les=59/60 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:55 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 60 pg[9.1d( v 51'590 (0'0,51'590] local-lis/les=59/60 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=59) [0] r=0 lpr=59 pi=[54,59)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 13 peering, 62 unknown, 246 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 359 B/s, 7 objects/s recovering
Nov 26 01:17:55 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 26 01:17:55 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 26 01:17:55 compute-0 python3.9[223873]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:17:55 compute-0 podman[223878]: 2025-11-26 01:17:55.929766935 +0000 UTC m=+0.145711617 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:17:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 26 01:17:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 26 01:17:56 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 26 01:17:56 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 61 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=61 pruub=13.555001259s) [0] async=[0] r=-1 lpr=61 pi=[54,61)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 130.558303833s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:56 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 61 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=57/58 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=61 pruub=13.554895401s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.558303833s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.11( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.3( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.9( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.1( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.b( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:56 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 61 pg[9.5( v 51'590 (0'0,51'590] local-lis/les=60/61 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=60) [0] r=0 lpr=60 pi=[54,60)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 26 01:17:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 26 01:17:57 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 26 01:17:57 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 62 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=61/62 n=7 ec=54/45 lis/c=57/54 les/c/f=58/55/0 sis=61) [0] r=0 lpr=61 pi=[54,61)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:17:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 1 active+remapped, 4 peering, 316 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 394 B/s, 9 objects/s recovering
Nov 26 01:17:57 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 26 01:17:57 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 26 01:17:57 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 26 01:17:57 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 26 01:17:57 compute-0 podman[224042]: 2025-11-26 01:17:57.588522743 +0000 UTC m=+0.135475249 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, architecture=x86_64, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:17:58 compute-0 python3.9[224137]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:17:58 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 01:17:58 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 01:17:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:17:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 1 active+remapped, 4 peering, 316 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 273 B/s, 6 objects/s recovering
Nov 26 01:17:59 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 01:17:59 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 01:17:59 compute-0 podman[158021]: time="2025-11-26T01:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:17:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:17:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6803 "" "Go-http-client/1.1"
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.776 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.777 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:17:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:17:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
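[Editor's note] The block above is one complete ceilometer polling cycle: each pollster is registered onto a shared ThreadPoolExecutor, the 'local_instances' discovery runs and returns an empty list (evidently no guest instances exist on this compute node yet), every meter is skipped for the cycle, and each task is then marked finished. A minimal Python sketch of that register/discover/skip/finish flow, using simplified stand-ins rather than ceilometer's actual classes:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of the flow the DEBUG lines above record.
# NOT ceilometer's real code: discover_local_instances() and
# run_pollster() are simplified stand-ins for AgentManager.
def discover_local_instances():
    # The log's 'local_instances' discovery finds no guests on
    # this host, so every pollster is skipped this cycle.
    return []

def run_pollster(name, discovery_cache):
    resources = discovery_cache["local_instances"]
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return [(name, r) for r in resources]

pollsters = ["disk.device.usage", "power.state", "cpu", "memory.usage"]
# One discovery result per method, shared by every pollster in the
# task -- mirroring the discovery cache [{'local_instances': []}].
discovery_cache = {"local_instances": discover_local_instances()}

with ThreadPoolExecutor() as executor:
    futures = {executor.submit(run_pollster, name, discovery_cache): name
               for name in pollsters}
    for future, name in futures.items():
        future.result()
        print(f"Finished processing pollster [{name}].")
```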
Nov 26 01:18:00 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 01:18:00 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 01:18:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 26 01:18:00 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 26 01:18:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 242 B/s, 13 objects/s recovering
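[Editor's note] The ceph-mgr pgmap line is a one-record cluster summary: 321 PGs, all active+clean, and 60 GiB of raw capacity (three OSDs, per the osdmap lines below). When sifting long logs like this one, a small parser helps; a regex sketch matching this line's layout (the group names are mine, and ceph may vary the trailing rate fields, so treat the pattern as an assumption rather than a format guarantee):

```python
import re

# Parses a ceph-mgr pgmap summary like the one logged above.
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v153: 321 pgs: 321 active+clean; 456 KiB data, "
        "103 MiB used, 60 GiB / 60 GiB avail; 242 B/s, "
        "13 objects/s recovering")

match = PGMAP_RE.search(line)
if match:
    print(match.groupdict())
    # {'version': '153', 'pgs': '321', 'data': '456 KiB',
    #  'used': '103 MiB', 'avail': '60 GiB', 'total': '60 GiB'}
```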
Nov 26 01:18:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:18:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 01:18:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:18:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:18:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 26 01:18:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
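[Editor's note] The sequence above shows the mgr (entity mgr.compute-0.vbisdw) driving pgp_num_actual adjustments on the RGW pools through mon commands, each audited once at dispatch and again at completion, after which the osdmap advances to e63. The same adjustments could be replayed by hand with the ceph CLI, which issues the identical mon_command JSON under the hood; a hedged subprocess sketch (pool names and values copied from the log, a reachable cluster and admin keyring assumed):

```python
import subprocess

# Replays, by hand, the pool adjustments the mgr dispatched above.
# 'ceph osd pool set <pool> pgp_num_actual <val>' is the CLI form
# of the mon_command JSON shown in the audit log lines.
adjustments = {
    "default.rgw.control": 32,
    "default.rgw.log": 3,
    "default.rgw.meta": 32,
}

for pool, pgp in adjustments.items():
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool,
         "pgp_num_actual", str(pgp)],
        check=True,
    )
```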
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.433103561s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.753829956s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.434961319s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755722046s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.433056831s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.753829956s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.d( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432786942s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 130.753570557s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.d( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432741165s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 130.753570557s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.434886932s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755722046s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432522774s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.753601074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432502747s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.753601074s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432414055s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.753616333s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432213783s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.753616333s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432090759s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.753646851s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432065964s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.753646851s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432801247s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.754547119s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432731628s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.754623413s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432703972s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.754623413s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432506561s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.754623413s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.432479858s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.754623413s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431046486s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.754623413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431022644s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.754623413s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431981087s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755737305s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431965828s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755737305s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431983948s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755859375s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431967735s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755859375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431901932s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755874634s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431869507s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.9( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431560516s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 130.755767822s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.9( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431523323s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 130.755767822s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.e( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431566238s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 130.755828857s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.e( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431536674s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 130.755828857s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431382179s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755844116s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431367874s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755844116s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431288719s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755844116s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.14( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431275368s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 130.755874634s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431258202s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755844116s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.15( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431033134s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 130.755859375s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.15( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.430994987s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 130.755859375s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.14( v 61'65 (0'0,61'65] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.431256294s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 130.755874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.430982590s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755996704s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.430957794s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755996704s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.430672646s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755889893s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.430647850s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755889893s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=56/57 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.428255081s) [1] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.754547119s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.11( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411846161s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.532073975s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411828041s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.532073975s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411674500s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.532043457s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411661148s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.532043457s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411581993s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.532058716s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411569595s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.532058716s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411514282s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.532180786s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411499977s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.532180786s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411437988s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.532226562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=56/57 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411423683s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.532226562s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412312508s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533248901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412300110s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533248901s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412746429s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533767700s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412715912s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533767700s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412646294s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412631989s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412560463s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412549019s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412460327s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533798218s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412446976s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533798218s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412319183s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533798218s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412306786s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533798218s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412160873s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533798218s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412147522s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533798218s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412065506s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533813477s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.412024498s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533813477s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411979675s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.533859253s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.411968231s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.533859253s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426451683s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.548461914s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426440239s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.548461914s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426651001s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.548782349s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426638603s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.548782349s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426639557s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.548858643s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426629066s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.548858643s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426915169s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549240112s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426904678s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549240112s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426901817s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549362183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426889420s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549362183s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426747322s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549362183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426734924s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549362183s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426646233s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549377441s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426632881s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549377441s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426956177s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549789429s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426935196s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549789429s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426458359s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549377441s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426446915s) [2] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549377441s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426337242s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.549377441s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 63 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.426324844s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.549377441s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.410503387s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 130.755722046s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=56/57 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=15.407553673s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.755722046s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.4( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=0/0 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 63 pg[10.1( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:01 compute-0 openstack_network_exporter[160178]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:18:01 compute-0 openstack_network_exporter[160178]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:18:01 compute-0 openstack_network_exporter[160178]: ERROR   01:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:18:01 compute-0 openstack_network_exporter[160178]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:18:01 compute-0 openstack_network_exporter[160178]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:18:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 01:18:01 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 01:18:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 01:18:01 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 01:18:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 26 01:18:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:18:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 01:18:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 26 01:18:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 26 01:18:02 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.14( v 61'65 lc 51'54 (0'0,61'65] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=61'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 64 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [1] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=63/64 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 64 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [2] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.d( v 61'65 lc 51'50 (0'0,61'65] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=61'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=63/64 n=1 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.e( v 61'65 lc 51'48 (0'0,61'65] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=61'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.15( v 61'65 lc 51'46 (0'0,61'65] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=61'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=63/64 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[10.9( v 61'65 lc 51'56 (0'0,61'65] local-lis/les=63/64 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=61'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 64 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=63/64 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:02 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 26 01:18:02 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 26 01:18:02 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 26 01:18:02 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 26 01:18:03 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 26 01:18:03 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 26 01:18:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 225 B/s, 12 objects/s recovering
Nov 26 01:18:03 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 01:18:03 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 01:18:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 01:18:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 01:18:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 01:18:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 01:18:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 9 objects/s recovering
Nov 26 01:18:05 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 26 01:18:05 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 26 01:18:05 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 26 01:18:05 compute-0 systemd[1]: session-40.scope: Consumed 9.940s CPU time.
Nov 26 01:18:05 compute-0 systemd-logind[800]: Session 40 logged out. Waiting for processes to exit.
Nov 26 01:18:05 compute-0 systemd-logind[800]: Removed session 40.
Nov 26 01:18:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 26 01:18:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 26 01:18:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 215 B/s, 9 objects/s recovering
Nov 26 01:18:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 26 01:18:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 01:18:07 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Nov 26 01:18:07 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Nov 26 01:18:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 26 01:18:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 26 01:18:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 01:18:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 26 01:18:07 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 26 01:18:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 26 01:18:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 26 01:18:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 26 01:18:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 0 objects/s recovering
Nov 26 01:18:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 26 01:18:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 01:18:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 26 01:18:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 26 01:18:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 01:18:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 26 01:18:09 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 26 01:18:09 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 01:18:09 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 01:18:10 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 0 objects/s recovering
Nov 26 01:18:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 26 01:18:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 01:18:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 26 01:18:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 26 01:18:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 01:18:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 26 01:18:11 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 26 01:18:11 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Nov 26 01:18:11 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Nov 26 01:18:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 26 01:18:12 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 01:18:12 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 01:18:12 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 01:18:12 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 01:18:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 26 01:18:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 01:18:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 26 01:18:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 01:18:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 26 01:18:13 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.121259689s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 143.370941162s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.121155739s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.370941162s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:13 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.119937897s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 143.370956421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.119902611s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.370956421s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.119215965s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 143.370956421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.119119644s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.370956421s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.120297432s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 143.372604370s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:13 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 68 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=9.120254517s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.372604370s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:13 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:13 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:13 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:13 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [2] r=0 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:13 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 26 01:18:13 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 26 01:18:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 26 01:18:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 26 01:18:14 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[54,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:14 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 69 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 01:18:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 01:18:14 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 26 01:18:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 26 01:18:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 26 01:18:15 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 26 01:18:15 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 26 01:18:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 26 01:18:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 01:18:15 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 26 01:18:15 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 70 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:15 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 70 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:15 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 70 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:15 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 70 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[54,69)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 26 01:18:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 01:18:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 26 01:18:16 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.091287613s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 152.088211060s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.091222763s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.088211060s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.091023445s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 152.088165283s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.090911865s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.088165283s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.090394974s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 152.088088989s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=69/70 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.090331078s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.088088989s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.090559959s) [2] async=[2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 152.088363647s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 71 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=69/70 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71 pruub=15.090293884s) [2] r=-1 lpr=71 pi=[54,71)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.088363647s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 26 01:18:16 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 01:18:16 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 01:18:16 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Nov 26 01:18:16 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Nov 26 01:18:16 compute-0 podman[224195]: 2025-11-26 01:18:16.580537632 +0000 UTC m=+0.124231502 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71 pruub=11.472172737s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=51'590 mlcod 0'0 active pruub 155.313751221s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71 pruub=11.472084045s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 155.313751221s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=61/62 n=7 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=71 pruub=12.488200188s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=51'590 mlcod 0'0 active pruub 156.330718994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=61/62 n=7 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=71 pruub=12.488148689s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 156.330718994s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71 pruub=11.463454247s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=51'590 mlcod 0'0 active pruub 155.306442261s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71 pruub=11.463433266s) [2] r=-1 lpr=71 pi=[60,71)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 155.306442261s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=71 pruub=10.465743065s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=51'590 mlcod 0'0 active pruub 154.308914185s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:16 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 71 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=71 pruub=10.465725899s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 154.308914185s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=71) [2] r=0 lpr=71 pi=[60,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 71 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=71) [2] r=0 lpr=71 pi=[59,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:16 compute-0 podman[224196]: 2025-11-26 01:18:16.609722904 +0000 UTC m=+0.150186093 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:18:16 compute-0 podman[224197]: 2025-11-26 01:18:16.630312435 +0000 UTC m=+0.168760117 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 01:18:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 26 01:18:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 26 01:18:17 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=61/62 n=7 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=72) [2]/[0] r=0 lpr=72 pi=[61,72)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=61/62 n=7 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=72) [2]/[0] r=0 lpr=72 pi=[61,72)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=72) [2]/[0] r=0 lpr=72 pi=[59,72)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 72 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=59/60 n=7 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=72) [2]/[0] r=0 lpr=72 pi=[59,72)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[61,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[61,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=72) [2]/[0] r=-1 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.6( v 51'590 (0'0,51'590] local-lis/les=71/72 n=7 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 72 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=69/54 les/c/f=70/55/0 sis=71) [2] r=0 lpr=71 pi=[54,71)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v171: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 01:18:17 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 26 01:18:17 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 26 01:18:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 01:18:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 01:18:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 26 01:18:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 26 01:18:18 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 26 01:18:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 73 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=59/59 les/c/f=60/60/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[59,72)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 73 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 73 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[60,72)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 73 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=61/61 les/c/f=62/62/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[61,72)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 26 01:18:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 26 01:18:19 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=72/59 les/c/f=73/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=72/59 les/c/f=73/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=72/61 les/c/f=73/62/0 sis=74) [2] r=0 lpr=74 pi=[61,74)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=72/61 les/c/f=73/62/0 sis=74) [2] r=0 lpr=74 pi=[61,74)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 74 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=72/59 les/c/f=73/60/0 sis=74 pruub=15.044567108s) [2] async=[2] r=-1 lpr=74 pi=[59,74)/1 crt=51'590 mlcod 51'590 active pruub 161.324188232s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74 pruub=15.050904274s) [2] async=[2] r=-1 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 51'590 active pruub 161.330596924s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=72/59 les/c/f=73/60/0 sis=74 pruub=15.044471741s) [2] r=-1 lpr=74 pi=[59,74)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 161.324188232s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74 pruub=15.049913406s) [2] async=[2] r=-1 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 51'590 active pruub 161.330352783s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74 pruub=15.049695969s) [2] r=-1 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 161.330352783s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=72/73 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74 pruub=15.049493790s) [2] r=-1 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 161.330596924s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=72/61 les/c/f=73/62/0 sis=74 pruub=15.046974182s) [2] async=[2] r=-1 lpr=74 pi=[61,74)/1 crt=51'590 mlcod 51'590 active pruub 161.330947876s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 74 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=72/73 n=7 ec=54/45 lis/c=72/61 les/c/f=73/62/0 sis=74 pruub=15.046311378s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 161.330947876s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 26 01:18:19 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 26 01:18:19 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 26 01:18:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 26 01:18:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 26 01:18:20 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 26 01:18:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 75 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 75 pg[9.7( v 51'590 (0'0,51'590] local-lis/les=74/75 n=7 ec=54/45 lis/c=72/59 les/c/f=73/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 75 pg[9.f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=7 ec=54/45 lis/c=72/61 les/c/f=73/62/0 sis=74) [2] r=0 lpr=74 pi=[61,74)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 75 pg[9.17( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=72/60 les/c/f=73/61/0 sis=74) [2] r=0 lpr=74 pi=[60,74)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:20 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 01:18:20 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 01:18:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:21 compute-0 systemd-logind[800]: New session 41 of user zuul.
Nov 26 01:18:21 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 26 01:18:21 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 26 01:18:21 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 26 01:18:21 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 26 01:18:21 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 26 01:18:22 compute-0 podman[224391]: 2025-11-26 01:18:22.481701343 +0000 UTC m=+0.131046314 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:18:22 compute-0 podman[224390]: 2025-11-26 01:18:22.498162787 +0000 UTC m=+0.146983343 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, release=1755695350)
Nov 26 01:18:22 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 01:18:22 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 01:18:22 compute-0 python3.9[224446]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 01:18:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 32 op/s; 183 B/s, 7 objects/s recovering
Nov 26 01:18:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 26 01:18:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 01:18:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 26 01:18:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 01:18:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 26 01:18:23 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 26 01:18:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:24 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 26 01:18:24 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 26 01:18:24 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 26 01:18:24 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 26 01:18:24 compute-0 python3.9[224633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:18:25 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 26 01:18:25 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 26 01:18:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 671 B/s wr, 32 op/s; 180 B/s, 7 objects/s recovering
Nov 26 01:18:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 26 01:18:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 01:18:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 26 01:18:25 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 01:18:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 26 01:18:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 26 01:18:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 26 01:18:25 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 76 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76 pruub=13.258108139s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 159.372360229s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:25 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 77 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76 pruub=13.258035660s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.372360229s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:25 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 76 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76 pruub=13.253533363s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 159.372360229s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:25 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 77 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76 pruub=13.253426552s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.372360229s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:25 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 77 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2] r=0 lpr=77 pi=[54,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:25 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 77 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2] r=0 lpr=77 pi=[54,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:25 compute-0 python3.9[224789]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:18:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 26 01:18:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 26 01:18:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 26 01:18:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 26 01:18:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[54,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 78 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 78 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 78 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 78 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:26 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 01:18:26 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 01:18:26 compute-0 podman[224815]: 2025-11-26 01:18:26.61210431 +0000 UTC m=+0.160368401 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:18:26 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 26 01:18:26 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 26 01:18:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 511 B/s wr, 32 op/s; 183 B/s, 7 objects/s recovering
Nov 26 01:18:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 26 01:18:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 01:18:27 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Nov 26 01:18:27 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Nov 26 01:18:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 26 01:18:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 01:18:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 26 01:18:27 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 26 01:18:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 26 01:18:27 compute-0 python3.9[224960]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:18:27 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 79 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=78/79 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:27 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 79 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=78/79 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[54,78)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:28 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 01:18:28 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 01:18:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 26 01:18:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 26 01:18:28 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 26 01:18:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 26 01:18:28 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 80 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=78/79 n=7 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.507519722s) [2] async=[2] r=-1 lpr=80 pi=[54,80)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 164.698318481s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:28 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 80 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=78/79 n=7 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.507397652s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.698318481s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:28 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 80 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=78/79 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.497020721s) [2] async=[2] r=-1 lpr=80 pi=[54,80)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 164.688278198s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:28 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 80 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=78/79 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.496935844s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.688278198s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:28 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 80 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:28 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 80 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:28 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 80 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:28 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 80 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:28 compute-0 podman[225067]: 2025-11-26 01:18:28.5487677 +0000 UTC m=+0.107071139 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Nov 26 01:18:28 compute-0 python3.9[225131]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:18:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 26 01:18:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 01:18:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 26 01:18:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 26 01:18:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 26 01:18:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 01:18:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 26 01:18:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 26 01:18:29 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 26 01:18:29 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 81 pg[9.8( v 51'590 (0'0,51'590] local-lis/les=80/81 n=7 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:29 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 81 pg[9.18( v 51'590 (0'0,51'590] local-lis/les=80/81 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:29 compute-0 podman[158021]: time="2025-11-26T01:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:18:29 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 01:18:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:18:29 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 01:18:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6802 "" "Go-http-client/1.1"
Nov 26 01:18:30 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 26 01:18:30 compute-0 python3.9[225283]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:18:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 26 01:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 01:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 26 01:18:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 26 01:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 01:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 26 01:18:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 26 01:18:31 compute-0 openstack_network_exporter[160178]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:18:31 compute-0 openstack_network_exporter[160178]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:18:31 compute-0 openstack_network_exporter[160178]: ERROR   01:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:18:31 compute-0 openstack_network_exporter[160178]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:18:31 compute-0 openstack_network_exporter[160178]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:18:31 compute-0 python3.9[225433]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:18:31 compute-0 network[225450]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:18:31 compute-0 network[225451]: 'network-scripts' will be removed from distribution in near future.
Nov 26 01:18:31 compute-0 network[225452]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:18:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 01:18:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 01:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 26 01:18:32 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 01:18:32 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 01:18:32 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 26 01:18:32 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 26 01:18:32 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 82 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82 pruub=13.494399071s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 167.372222900s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:32 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 82 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82 pruub=13.494356155s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.372222900s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:32 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82) [2] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:32 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 82 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82 pruub=13.493927002s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 167.374130249s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:32 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 82 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82 pruub=13.493892670s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.374130249s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:32 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=82) [2] r=0 lpr=82 pi=[54,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 2 objects/s recovering
Nov 26 01:18:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 26 01:18:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 01:18:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 26 01:18:33 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 26 01:18:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 01:18:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 26 01:18:33 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 26 01:18:33 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:33 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:33 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:33 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:33 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 83 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:33 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 83 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:33 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 83 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:33 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 83 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=54/55 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:34 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 26 01:18:34 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 26 01:18:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 26 01:18:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 26 01:18:34 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 26 01:18:34 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 26 01:18:34 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 84 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=7 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:34 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 84 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[54,83)/1 crt=51'590 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:34 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 26 01:18:34 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 26 01:18:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 2 objects/s recovering
Nov 26 01:18:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 26 01:18:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 01:18:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 26 01:18:35 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 26 01:18:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 01:18:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 26 01:18:35 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 26 01:18:35 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 85 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:35 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 85 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:35 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 85 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85 pruub=15.251470566s) [2] async=[2] r=-1 lpr=85 pi=[54,85)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 171.592437744s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:35 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 85 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:35 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 85 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=7 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:35 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 85 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=6 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85 pruub=15.251380920s) [2] r=-1 lpr=85 pi=[54,85)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.592437744s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:35 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 85 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=7 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85 pruub=15.245152473s) [2] async=[2] r=-1 lpr=85 pi=[54,85)/1 crt=51'590 lcod 0'0 mlcod 0'0 active pruub 171.586975098s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:35 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 85 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=83/84 n=7 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85 pruub=15.245067596s) [2] r=-1 lpr=85 pi=[54,85)/1 crt=51'590 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.586975098s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:35 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 26 01:18:35 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 26 01:18:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 26 01:18:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 26 01:18:36 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 26 01:18:36 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 26 01:18:36 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 86 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=6 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:36 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 86 pg[9.c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=7 ec=54/45 lis/c=83/54 les/c/f=84/55/0 sis=85) [2] r=0 lpr=85 pi=[54,85)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 26 01:18:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 26 01:18:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 01:18:37 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 26 01:18:37 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 26 01:18:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 26 01:18:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 01:18:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 26 01:18:37 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 26 01:18:37 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 26 01:18:37 compute-0 python3.9[225721]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:18:37 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 26 01:18:37 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 26 01:18:38 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 01:18:38 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 01:18:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 26 01:18:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:39 compute-0 python3.9[225871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:18:39 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 01:18:39 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 01:18:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 2 objects/s recovering
Nov 26 01:18:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 26 01:18:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 01:18:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 26 01:18:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 01:18:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 26 01:18:39 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 26 01:18:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 26 01:18:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 26 01:18:40 compute-0 python3.9[226025]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:18:40 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 01:18:40 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 01:18:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:18:40
Nov 26 01:18:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:18:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:18:40 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Nov 26 01:18:40 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:18:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 2 objects/s recovering
Nov 26 01:18:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 26 01:18:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 01:18:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 26 01:18:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 01:18:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 26 01:18:41 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 26 01:18:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 26 01:18:42 compute-0 python3.9[226183]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:18:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 26 01:18:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 26 01:18:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 01:18:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 26 01:18:43 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 01:18:43 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 01:18:43 compute-0 python3.9[226267]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:18:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 26 01:18:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 01:18:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 26 01:18:43 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 26 01:18:43 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 26 01:18:43 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 26 01:18:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 26 01:18:44 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Nov 26 01:18:44 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Nov 26 01:18:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 26 01:18:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 01:18:45 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 01:18:45 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 01:18:45 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 26 01:18:45 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 26 01:18:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 26 01:18:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 01:18:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 26 01:18:45 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 26 01:18:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 26 01:18:45 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 91 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=91 pruub=14.145223618s) [2] r=-1 lpr=91 pi=[60,91)/1 crt=51'590 mlcod 0'0 active pruub 187.315444946s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:45 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 91 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=91 pruub=14.145100594s) [2] r=-1 lpr=91 pi=[60,91)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 187.315444946s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:45 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=91) [2] r=0 lpr=91 pi=[60,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 26 01:18:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 26 01:18:46 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 26 01:18:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 26 01:18:46 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[60,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:46 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[60,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 92 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=92) [2]/[0] r=0 lpr=92 pi=[60,92)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:46 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 92 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=92) [2]/[0] r=0 lpr=92 pi=[60,92)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:47 compute-0 podman[226332]: 2025-11-26 01:18:47.574682246 +0000 UTC m=+0.112351367 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:18:47 compute-0 podman[226331]: 2025-11-26 01:18:47.574761058 +0000 UTC m=+0.119312253 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:18:47 compute-0 podman[226333]: 2025-11-26 01:18:47.628640927 +0000 UTC m=+0.161565204 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:18:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 26 01:18:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 26 01:18:47 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 26 01:18:47 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 93 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=92/93 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[60,92)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 26 01:18:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 26 01:18:48 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 26 01:18:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 94 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=92/93 n=6 ec=54/45 lis/c=92/60 les/c/f=93/61/0 sis=94 pruub=14.969014168s) [2] async=[2] r=-1 lpr=94 pi=[60,94)/1 crt=51'590 mlcod 51'590 active pruub 191.226181030s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:49 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 94 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=92/93 n=6 ec=54/45 lis/c=92/60 les/c/f=93/61/0 sis=94 pruub=14.967408180s) [2] r=-1 lpr=94 pi=[60,94)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 191.226181030s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 94 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=92/60 les/c/f=93/61/0 sis=94) [2] r=0 lpr=94 pi=[60,94)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:49 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 94 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=92/60 les/c/f=93/61/0 sis=94) [2] r=0 lpr=94 pi=[60,94)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 26 01:18:49 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 26 01:18:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 26 01:18:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 26 01:18:50 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 26 01:18:50 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 95 pg[9.13( v 51'590 (0'0,51'590] local-lis/les=94/95 n=6 ec=54/45 lis/c=92/60 les/c/f=93/61/0 sis=94) [2] r=0 lpr=94 pi=[60,94)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:18:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:18:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:51 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 26 01:18:51 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 26 01:18:51 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Nov 26 01:18:51 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Nov 26 01:18:52 compute-0 podman[226569]: 2025-11-26 01:18:52.288429194 +0000 UTC m=+0.162311036 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:18:52 compute-0 podman[226569]: 2025-11-26 01:18:52.382057012 +0000 UTC m=+0.255938744 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:18:52 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 26 01:18:52 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 26 01:18:52 compute-0 podman[226617]: 2025-11-26 01:18:52.747981255 +0000 UTC m=+0.123617845 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 01:18:52 compute-0 podman[226618]: 2025-11-26 01:18:52.766917339 +0000 UTC m=+0.136971862 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:18:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 36 B/s, 1 objects/s recovering
Nov 26 01:18:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 26 01:18:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 01:18:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:18:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:18:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 26 01:18:53 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 26 01:18:53 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 26 01:18:53 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 26 01:18:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 26 01:18:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:54 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 26 01:18:54 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9140e4ed-9ba4-463f-8a3d-ff492f2640ef does not exist
Nov 26 01:18:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 94732dbe-56e7-4134-8bc5-61506203a373 does not exist
Nov 26 01:18:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev de38bcea-43ec-4f9f-b9d7-41ff8962584f does not exist
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:18:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:18:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:18:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 26 01:18:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:18:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:18:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:18:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 166 B/s wr, 8 op/s; 35 B/s, 1 objects/s recovering
Nov 26 01:18:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 26 01:18:55 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 01:18:55 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 26 01:18:55 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.037280007 +0000 UTC m=+0.082122606 container create fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:18:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.005575633 +0000 UTC m=+0.050418302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:18:56 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 01:18:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 26 01:18:56 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 26 01:18:56 compute-0 systemd[1]: Started libpod-conmon-fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f.scope.
Nov 26 01:18:56 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 26 01:18:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.176673335 +0000 UTC m=+0.221515914 container init fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.191729529 +0000 UTC m=+0.236572108 container start fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.19636105 +0000 UTC m=+0.241203709 container attach fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:18:56 compute-0 happy_carson[227048]: 167 167
Nov 26 01:18:56 compute-0 systemd[1]: libpod-fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f.scope: Deactivated successfully.
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.205345163 +0000 UTC m=+0.250187772 container died fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-45b75979d3b535db1670b20c4d4f341e5129e73c705112b1d19d988c7055e889-merged.mount: Deactivated successfully.
Nov 26 01:18:56 compute-0 podman[227032]: 2025-11-26 01:18:56.285401469 +0000 UTC m=+0.330244048 container remove fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:18:56 compute-0 systemd[1]: libpod-conmon-fe4171cccd7a5ff9b2bcbc2fc63512b0b5137f232a84bd37e3f287fb0b7b081f.scope: Deactivated successfully.
Nov 26 01:18:56 compute-0 podman[227074]: 2025-11-26 01:18:56.526271718 +0000 UTC m=+0.061375561 container create 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:18:56 compute-0 systemd[1]: Started libpod-conmon-61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0.scope.
Nov 26 01:18:56 compute-0 podman[227074]: 2025-11-26 01:18:56.505631076 +0000 UTC m=+0.040734869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:18:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:56 compute-0 podman[227074]: 2025-11-26 01:18:56.717809946 +0000 UTC m=+0.252913759 container init 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:18:56 compute-0 podman[227074]: 2025-11-26 01:18:56.745251239 +0000 UTC m=+0.280355042 container start 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:18:56 compute-0 podman[227074]: 2025-11-26 01:18:56.752802892 +0000 UTC m=+0.287906705 container attach 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:18:56 compute-0 podman[227093]: 2025-11-26 01:18:56.863604565 +0000 UTC m=+0.174326704 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Nov 26 01:18:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 26 01:18:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 143 B/s wr, 7 op/s; 30 B/s, 1 objects/s recovering
Nov 26 01:18:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 26 01:18:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 01:18:57 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 97 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=97 pruub=10.556965828s) [1] r=-1 lpr=97 pi=[60,97)/1 crt=51'590 mlcod 0'0 active pruub 195.308853149s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:57 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 97 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=97 pruub=10.556887627s) [1] r=-1 lpr=97 pi=[60,97)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 195.308853149s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:57 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 97 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=97) [1] r=0 lpr=97 pi=[60,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:57 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 01:18:57 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 01:18:57 compute-0 funny_pasteur[227090]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:18:57 compute-0 funny_pasteur[227090]: --> relative data size: 1.0
Nov 26 01:18:57 compute-0 funny_pasteur[227090]: --> All data devices are unavailable
Nov 26 01:18:57 compute-0 systemd[1]: libpod-61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0.scope: Deactivated successfully.
Nov 26 01:18:57 compute-0 systemd[1]: libpod-61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0.scope: Consumed 1.170s CPU time.
Nov 26 01:18:57 compute-0 podman[227074]: 2025-11-26 01:18:57.978869535 +0000 UTC m=+1.513973318 container died 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-79162e028af2c457cb529de0471ee4123c3daed85a96e4e63e438973ead1140b-merged.mount: Deactivated successfully.
Nov 26 01:18:58 compute-0 podman[227074]: 2025-11-26 01:18:58.054167362 +0000 UTC m=+1.589271135 container remove 61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_pasteur, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:18:58 compute-0 systemd[1]: libpod-conmon-61f5e30b34d4f2a2e318a6c68893b2cf07cac57d9c7111812a9c8046f8964bb0.scope: Deactivated successfully.
Nov 26 01:18:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 26 01:18:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 26 01:18:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 01:18:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 26 01:18:58 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 26 01:18:58 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[60,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:58 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=98) [1]/[0] r=-1 lpr=98 pi=[60,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:58 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 98 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=98) [1]/[0] r=0 lpr=98 pi=[60,98)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:58 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 98 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=98) [1]/[0] r=0 lpr=98 pi=[60,98)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:58 compute-0 podman[227251]: 2025-11-26 01:18:58.750065469 +0000 UTC m=+0.144859044 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:18:58 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 26 01:18:58 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 26 01:18:58 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Nov 26 01:18:58 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Nov 26 01:18:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.061957274 +0000 UTC m=+0.070273977 container create 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:18:59 compute-0 systemd[1]: Started libpod-conmon-3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e.scope.
Nov 26 01:18:59 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 98 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=98 pruub=13.943964005s) [0] r=-1 lpr=98 pi=[71,98)/1 crt=51'590 mlcod 0'0 active pruub 187.251419067s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.040187365 +0000 UTC m=+0.048504068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:18:59 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 98 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=98 pruub=13.942724228s) [0] r=-1 lpr=98 pi=[71,98)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 187.251419067s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:59 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=98) [0] r=0 lpr=98 pi=[71,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:18:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 26 01:18:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 01:18:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 26 01:18:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:18:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 01:18:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 26 01:18:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 26 01:18:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 26 01:18:59 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 26 01:18:59 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=99) [0]/[2] r=-1 lpr=99 pi=[71,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:59 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=99) [0]/[2] r=-1 lpr=99 pi=[71,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.186473387 +0000 UTC m=+0.194790150 container init 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:18:59 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 99 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=99) [0]/[2] r=0 lpr=99 pi=[71,99)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:18:59 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 99 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=99) [0]/[2] r=0 lpr=99 pi=[71,99)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.198809802 +0000 UTC m=+0.207126525 container start 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:18:59 compute-0 stupefied_pasteur[227330]: 167 167
Nov 26 01:18:59 compute-0 systemd[1]: libpod-3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e.scope: Deactivated successfully.
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.217883225 +0000 UTC m=+0.226200008 container attach 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.2191139 +0000 UTC m=+0.227430623 container died 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee818a781f90d2651d66779b55cf7255af17f05bf9634176694f462546ebfe8-merged.mount: Deactivated successfully.
Nov 26 01:18:59 compute-0 podman[227314]: 2025-11-26 01:18:59.314078796 +0000 UTC m=+0.322395519 container remove 3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:18:59 compute-0 systemd[1]: libpod-conmon-3a939c099848d76517f00fa7fc4812ebe4834304c7101e4b1bbe97175a17779e.scope: Deactivated successfully.
Nov 26 01:18:59 compute-0 podman[227357]: 2025-11-26 01:18:59.597746962 +0000 UTC m=+0.079126385 container create 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:18:59 compute-0 podman[227357]: 2025-11-26 01:18:59.564286726 +0000 UTC m=+0.045666199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:18:59 compute-0 systemd[1]: Started libpod-conmon-59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a.scope.
Nov 26 01:18:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a7d5da48657aaade5c4cafe277a5ca5a7a54a98496915750e9eb927ff8b6f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:59 compute-0 podman[158021]: time="2025-11-26T01:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a7d5da48657aaade5c4cafe277a5ca5a7a54a98496915750e9eb927ff8b6f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a7d5da48657aaade5c4cafe277a5ca5a7a54a98496915750e9eb927ff8b6f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a7d5da48657aaade5c4cafe277a5ca5a7a54a98496915750e9eb927ff8b6f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:18:59 compute-0 podman[227357]: 2025-11-26 01:18:59.799426404 +0000 UTC m=+0.280805897 container init 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:18:59 compute-0 podman[227357]: 2025-11-26 01:18:59.818795875 +0000 UTC m=+0.300175288 container start 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:18:59 compute-0 podman[227357]: 2025-11-26 01:18:59.826244074 +0000 UTC m=+0.307623557 container attach 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:18:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34390 "" "Go-http-client/1.1"
Nov 26 01:18:59 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 99 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=98/99 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[60,98)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:18:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7209 "" "Go-http-client/1.1"
Nov 26 01:19:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 26 01:19:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 26 01:19:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 26 01:19:00 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 26 01:19:00 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 100 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=98/99 n=6 ec=54/45 lis/c=98/60 les/c/f=99/61/0 sis=100 pruub=15.641495705s) [1] async=[1] r=-1 lpr=100 pi=[60,100)/1 crt=51'590 mlcod 51'590 active pruub 203.093444824s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:00 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 100 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=98/99 n=6 ec=54/45 lis/c=98/60 les/c/f=99/61/0 sis=100 pruub=15.640053749s) [1] r=-1 lpr=100 pi=[60,100)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 203.093444824s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:00 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 100 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=98/60 les/c/f=99/61/0 sis=100) [1] r=0 lpr=100 pi=[60,100)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:00 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 100 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=98/60 les/c/f=99/61/0 sis=100) [1] r=0 lpr=100 pi=[60,100)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:00 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 100 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=99/100 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=99) [0]/[2] async=[0] r=0 lpr=99 pi=[71,99)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]: {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    "0": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "devices": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "/dev/loop3"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            ],
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_name": "ceph_lv0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_size": "21470642176",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "name": "ceph_lv0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "tags": {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_name": "ceph",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.crush_device_class": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.encrypted": "0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_id": "0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.vdo": "0"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            },
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "vg_name": "ceph_vg0"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        }
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    ],
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    "1": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "devices": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "/dev/loop4"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            ],
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_name": "ceph_lv1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_size": "21470642176",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "name": "ceph_lv1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "tags": {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_name": "ceph",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.crush_device_class": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.encrypted": "0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_id": "1",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.vdo": "0"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            },
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "vg_name": "ceph_vg1"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        }
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    ],
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    "2": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "devices": [
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "/dev/loop5"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            ],
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_name": "ceph_lv2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_size": "21470642176",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "name": "ceph_lv2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "tags": {
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.cluster_name": "ceph",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.crush_device_class": "",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.encrypted": "0",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osd_id": "2",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:                "ceph.vdo": "0"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            },
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "type": "block",
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:            "vg_name": "ceph_vg2"
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:        }
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]:    ]
Nov 26 01:19:00 compute-0 awesome_blackburn[227374]: }
Nov 26 01:19:00 compute-0 systemd[1]: libpod-59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a.scope: Deactivated successfully.
Nov 26 01:19:00 compute-0 podman[227357]: 2025-11-26 01:19:00.670980113 +0000 UTC m=+1.152359596 container died 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-38a7d5da48657aaade5c4cafe277a5ca5a7a54a98496915750e9eb927ff8b6f3-merged.mount: Deactivated successfully.
Nov 26 01:19:00 compute-0 podman[227357]: 2025-11-26 01:19:00.805177517 +0000 UTC m=+1.286556950 container remove 59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:19:00 compute-0 systemd[1]: libpod-conmon-59de248c9d4afdc914a90a5dca3810eee117edb337a8dbf7ffc7ba6ebb46771a.scope: Deactivated successfully.
Nov 26 01:19:00 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 26 01:19:00 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 26 01:19:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Nov 26 01:19:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 26 01:19:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 26 01:19:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 26 01:19:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 101 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=99/100 n=6 ec=54/45 lis/c=99/71 les/c/f=100/72/0 sis=101 pruub=15.007158279s) [0] async=[0] r=-1 lpr=101 pi=[71,101)/1 crt=51'590 mlcod 51'590 active pruub 190.409988403s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:01 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 101 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=99/100 n=6 ec=54/45 lis/c=99/71 les/c/f=100/72/0 sis=101 pruub=15.006982803s) [0] r=-1 lpr=101 pi=[71,101)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 190.409988403s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 101 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=99/71 les/c/f=100/72/0 sis=101) [0] r=0 lpr=101 pi=[71,101)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:01 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 101 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=99/71 les/c/f=100/72/0 sis=101) [0] r=0 lpr=101 pi=[71,101)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:01 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 101 pg[9.15( v 51'590 (0'0,51'590] local-lis/les=100/101 n=6 ec=54/45 lis/c=98/60 les/c/f=99/61/0 sis=100) [1] r=0 lpr=100 pi=[60,100)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:01 compute-0 openstack_network_exporter[160178]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:19:01 compute-0 openstack_network_exporter[160178]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:19:01 compute-0 openstack_network_exporter[160178]: ERROR   01:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:19:01 compute-0 openstack_network_exporter[160178]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:19:01 compute-0 openstack_network_exporter[160178]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:19:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 26 01:19:01 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 26 01:19:01 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 26 01:19:01 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 26 01:19:01 compute-0 podman[227563]: 2025-11-26 01:19:01.942412 +0000 UTC m=+0.076447349 container create e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:01.912149854 +0000 UTC m=+0.046185273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:19:02 compute-0 systemd[1]: Started libpod-conmon-e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044.scope.
Nov 26 01:19:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:02.081971025 +0000 UTC m=+0.216006474 container init e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:02.101351137 +0000 UTC m=+0.235386516 container start e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:02.107451027 +0000 UTC m=+0.241486456 container attach e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:19:02 compute-0 confident_brahmagupta[227579]: 167 167
Nov 26 01:19:02 compute-0 systemd[1]: libpod-e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044.scope: Deactivated successfully.
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:02.113085395 +0000 UTC m=+0.247120734 container died e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-83b4cc69977fd1e7d36a2b735ee44d4dd32ec8d2cceb8c568880eae03a456993-merged.mount: Deactivated successfully.
Nov 26 01:19:02 compute-0 podman[227563]: 2025-11-26 01:19:02.176271143 +0000 UTC m=+0.310306492 container remove e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:19:02 compute-0 systemd[1]: libpod-conmon-e4231da96f15d2ab6da2bf461924bd8affcca387bb4fb6a1397066787fcdc044.scope: Deactivated successfully.
Nov 26 01:19:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 26 01:19:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 26 01:19:02 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 26 01:19:02 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 102 pg[9.16( v 51'590 (0'0,51'590] local-lis/les=101/102 n=6 ec=54/45 lis/c=99/71 les/c/f=100/72/0 sis=101) [0] r=0 lpr=101 pi=[71,101)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:02 compute-0 podman[227602]: 2025-11-26 01:19:02.448757105 +0000 UTC m=+0.089227207 container create fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:19:02 compute-0 podman[227602]: 2025-11-26 01:19:02.414702092 +0000 UTC m=+0.055172244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:19:02 compute-0 systemd[1]: Started libpod-conmon-fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb.scope.
Nov 26 01:19:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e41f76fc069a8c799b1da57daa5a73bde26f4c1076277c5f59267ba1c316df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e41f76fc069a8c799b1da57daa5a73bde26f4c1076277c5f59267ba1c316df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e41f76fc069a8c799b1da57daa5a73bde26f4c1076277c5f59267ba1c316df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98e41f76fc069a8c799b1da57daa5a73bde26f4c1076277c5f59267ba1c316df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:19:02 compute-0 podman[227602]: 2025-11-26 01:19:02.654172491 +0000 UTC m=+0.294642633 container init fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:19:02 compute-0 podman[227602]: 2025-11-26 01:19:02.670502128 +0000 UTC m=+0.310972220 container start fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:19:02 compute-0 podman[227602]: 2025-11-26 01:19:02.677159704 +0000 UTC m=+0.317629796 container attach fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:19:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 26 01:19:03 compute-0 keen_mendel[227619]: {
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_id": 0,
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "type": "bluestore"
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    },
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_id": 2,
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "type": "bluestore"
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    },
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_id": 1,
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:19:03 compute-0 keen_mendel[227619]:        "type": "bluestore"
Nov 26 01:19:03 compute-0 keen_mendel[227619]:    }
Nov 26 01:19:03 compute-0 keen_mendel[227619]: }
Nov 26 01:19:03 compute-0 systemd[1]: libpod-fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb.scope: Deactivated successfully.
Nov 26 01:19:03 compute-0 podman[227602]: 2025-11-26 01:19:03.867612816 +0000 UTC m=+1.508082928 container died fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:19:03 compute-0 systemd[1]: libpod-fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb.scope: Consumed 1.195s CPU time.
Nov 26 01:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e41f76fc069a8c799b1da57daa5a73bde26f4c1076277c5f59267ba1c316df-merged.mount: Deactivated successfully.
Nov 26 01:19:03 compute-0 podman[227602]: 2025-11-26 01:19:03.989139926 +0000 UTC m=+1.629610028 container remove fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:19:04 compute-0 systemd[1]: libpod-conmon-fbaa4a2d72d72ab7e3e1a8cc2e108ddbe6ed8355f0656612d7361528cd9274eb.scope: Deactivated successfully.
Nov 26 01:19:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:19:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:19:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:19:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:19:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d4b6d776-8f47-4b21-904b-17118b75d815 does not exist
Nov 26 01:19:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev da3dcf87-073f-4d83-a88d-dd8400eddaa3 does not exist
Nov 26 01:19:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 26 01:19:04 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 26 01:19:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:19:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:19:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 26 01:19:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 26 01:19:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 26 01:19:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 1 objects/s recovering
Nov 26 01:19:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 26 01:19:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 01:19:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 26 01:19:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 01:19:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 26 01:19:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 26 01:19:07 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 26 01:19:07 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 26 01:19:07 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 26 01:19:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 26 01:19:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 26 01:19:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 26 01:19:08 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 01:19:08 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 01:19:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 01:19:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 26 01:19:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 01:19:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 26 01:19:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 01:19:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 26 01:19:09 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 26 01:19:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 26 01:19:09 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 104 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=104 pruub=14.646048546s) [2] r=-1 lpr=104 pi=[60,104)/1 crt=51'590 mlcod 0'0 active pruub 211.317199707s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:09 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 104 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=104 pruub=14.645963669s) [2] r=-1 lpr=104 pi=[60,104)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 211.317199707s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:09 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=104) [2] r=0 lpr=104 pi=[60,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 26 01:19:10 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 26 01:19:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 26 01:19:10 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 26 01:19:10 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[60,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:10 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[60,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:10 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 105 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=105) [2]/[0] r=0 lpr=105 pi=[60,105)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:10 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 105 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=60/61 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=105) [2]/[0] r=0 lpr=105 pi=[60,105)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 26 01:19:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 01:19:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 26 01:19:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 01:19:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 26 01:19:11 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 26 01:19:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 26 01:19:11 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 106 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=105/106 n=6 ec=54/45 lis/c=60/60 les/c/f=61/61/0 sis=105) [2]/[0] async=[2] r=0 lpr=105 pi=[60,105)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 26 01:19:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 26 01:19:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 26 01:19:12 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 26 01:19:12 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 107 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=105/106 n=6 ec=54/45 lis/c=105/60 les/c/f=106/61/0 sis=107 pruub=15.011804581s) [2] async=[2] r=-1 lpr=107 pi=[60,107)/1 crt=51'590 mlcod 51'590 active pruub 214.588745117s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:12 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 107 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=105/106 n=6 ec=54/45 lis/c=105/60 les/c/f=106/61/0 sis=107 pruub=15.010324478s) [2] r=-1 lpr=107 pi=[60,107)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 214.588745117s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:12 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 107 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=105/60 les/c/f=106/61/0 sis=107) [2] r=0 lpr=107 pi=[60,107)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:12 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 107 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=105/60 les/c/f=106/61/0 sis=107) [2] r=0 lpr=107 pi=[60,107)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 26 01:19:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 01:19:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 26 01:19:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 01:19:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 26 01:19:13 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 26 01:19:13 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 108 pg[9.19( v 51'590 (0'0,51'590] local-lis/les=107/108 n=6 ec=54/45 lis/c=105/60 les/c/f=106/61/0 sis=107) [2] r=0 lpr=107 pi=[60,107)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:13 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 26 01:19:13 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 26 01:19:13 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 26 01:19:13 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 26 01:19:13 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 26 01:19:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:14 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 26 01:19:14 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 26 01:19:14 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 26 01:19:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 01:19:14 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 01:19:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 26 01:19:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 01:19:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 26 01:19:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 01:19:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 26 01:19:15 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 26 01:19:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 26 01:19:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 26 01:19:16 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Nov 26 01:19:16 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Nov 26 01:19:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 01:19:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 26 01:19:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 01:19:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 26 01:19:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 01:19:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 26 01:19:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 26 01:19:17 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 26 01:19:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 109 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=6 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=109 pruub=14.675189972s) [0] r=-1 lpr=109 pi=[85,109)/1 crt=51'590 mlcod 0'0 active pruub 206.579467773s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:17 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 109 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=6 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=109 pruub=14.675021172s) [0] r=-1 lpr=109 pi=[85,109)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 206.579467773s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:17 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=109) [0] r=0 lpr=109 pi=[85,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 01:19:17 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 01:19:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 26 01:19:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 26 01:19:18 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 26 01:19:18 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 26 01:19:18 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 111 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=6 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=0 lpr=111 pi=[85,111)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:18 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 111 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=85/86 n=6 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=0 lpr=111 pi=[85,111)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[85,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:18 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[85,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:18 compute-0 podman[227742]: 2025-11-26 01:19:18.572506225 +0000 UTC m=+0.103949298 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:19:18 compute-0 podman[227741]: 2025-11-26 01:19:18.599626674 +0000 UTC m=+0.138133315 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 01:19:18 compute-0 podman[227743]: 2025-11-26 01:19:18.644561831 +0000 UTC m=+0.177522206 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 26 01:19:18 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 01:19:18 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 01:19:18 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 01:19:18 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 01:19:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 26 01:19:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 26 01:19:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 01:19:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 26 01:19:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 01:19:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 26 01:19:19 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 26 01:19:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 26 01:19:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 112 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=112 pruub=9.586546898s) [0] r=-1 lpr=112 pi=[71,112)/1 crt=51'590 mlcod 0'0 active pruub 203.252716064s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 112 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=112 pruub=9.586428642s) [0] r=-1 lpr=112 pi=[71,112)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 203.252716064s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:19 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=112) [0] r=0 lpr=112 pi=[71,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:19 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 112 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=111/112 n=6 ec=54/45 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[85,111)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:19 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 01:19:19 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 01:19:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 26 01:19:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 26 01:19:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 26 01:19:20 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 26 01:19:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 113 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=111/112 n=6 ec=54/45 lis/c=111/85 les/c/f=112/86/0 sis=113 pruub=14.999724388s) [0] async=[0] r=-1 lpr=113 pi=[85,113)/1 crt=51'590 mlcod 51'590 active pruub 209.702072144s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 113 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=113) [0]/[2] r=0 lpr=113 pi=[71,113)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 113 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=71/72 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=113) [0]/[2] r=0 lpr=113 pi=[71,113)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:20 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 113 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=111/112 n=6 ec=54/45 lis/c=111/85 les/c/f=112/86/0 sis=113 pruub=14.998247147s) [0] r=-1 lpr=113 pi=[85,113)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 209.702072144s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:20 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 113 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:20 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 113 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:20 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[71,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:20 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[71,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:20 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Nov 26 01:19:20 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Nov 26 01:19:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 26 01:19:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 26 01:19:21 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 26 01:19:21 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 114 pg[9.1c( v 51'590 (0'0,51'590] local-lis/les=113/114 n=6 ec=54/45 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:21 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 114 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=113/114 n=6 ec=54/45 lis/c=71/71 les/c/f=72/72/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[71,113)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:21 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 26 01:19:21 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 26 01:19:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 26 01:19:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 26 01:19:22 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 26 01:19:22 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 115 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=113/114 n=6 ec=54/45 lis/c=113/71 les/c/f=114/72/0 sis=115 pruub=15.018388748s) [0] async=[0] r=-1 lpr=115 pi=[71,115)/1 crt=51'590 mlcod 51'590 active pruub 211.742645264s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:22 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 115 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=113/114 n=6 ec=54/45 lis/c=113/71 les/c/f=114/72/0 sis=115 pruub=15.018222809s) [0] r=-1 lpr=115 pi=[71,115)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 211.742645264s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:22 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 115 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=113/71 les/c/f=114/72/0 sis=115) [0] r=0 lpr=115 pi=[71,115)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:22 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 115 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=113/71 les/c/f=114/72/0 sis=115) [0] r=0 lpr=115 pi=[71,115)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:22 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 26 01:19:22 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 26 01:19:22 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.b deep-scrub starts
Nov 26 01:19:22 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.b deep-scrub ok
Nov 26 01:19:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 26 01:19:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 26 01:19:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:19:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 26 01:19:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 26 01:19:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:19:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 26 01:19:23 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 26 01:19:23 compute-0 podman[227808]: 2025-11-26 01:19:23.574919482 +0000 UTC m=+0.121838280 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 26 01:19:23 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 116 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=116 pruub=8.469937325s) [1] r=-1 lpr=116 pi=[74,116)/1 crt=51'590 mlcod 0'0 active pruub 206.228698730s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:23 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 116 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=116 pruub=8.469870567s) [1] r=-1 lpr=116 pi=[74,116)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 206.228698730s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:23 compute-0 podman[227809]: 2025-11-26 01:19:23.584495939 +0000 UTC m=+0.124075481 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:19:23 compute-0 ceph-osd[206645]: osd.0 pg_epoch: 116 pg[9.1e( v 51'590 (0'0,51'590] local-lis/les=115/116 n=6 ec=54/45 lis/c=113/71 les/c/f=114/72/0 sis=115) [0] r=0 lpr=115 pi=[71,115)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:23 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=116) [1] r=0 lpr=116 pi=[74,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 26 01:19:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 26 01:19:24 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 26 01:19:24 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[74,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:24 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[74,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:24 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 117 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=117) [1]/[2] r=0 lpr=117 pi=[74,117)/1 crt=51'590 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:24 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 117 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=117) [1]/[2] r=0 lpr=117 pi=[74,117)/1 crt=51'590 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:24 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 26 01:19:24 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 26 01:19:24 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 26 01:19:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 26 01:19:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 26 01:19:25 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 26 01:19:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 282 B/s wr, 13 op/s; 30 B/s, 5 objects/s recovering
Nov 26 01:19:25 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 118 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=117/118 n=6 ec=54/45 lis/c=74/74 les/c/f=75/75/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[74,117)/1 crt=51'590 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:25 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 26 01:19:25 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 26 01:19:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 26 01:19:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 26 01:19:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 26 01:19:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 119 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/74 les/c/f=118/75/0 sis=119) [1] r=0 lpr=119 pi=[74,119)/1 luod=0'0 crt=51'590 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:26 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 119 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=0/0 n=6 ec=54/45 lis/c=117/74 les/c/f=118/75/0 sis=119) [1] r=0 lpr=119 pi=[74,119)/1 crt=51'590 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 26 01:19:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 119 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/74 les/c/f=118/75/0 sis=119 pruub=15.536689758s) [1] async=[1] r=-1 lpr=119 pi=[74,119)/1 crt=51'590 mlcod 51'590 active pruub 215.791717529s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 26 01:19:26 compute-0 ceph-osd[208794]: osd.2 pg_epoch: 119 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=117/118 n=6 ec=54/45 lis/c=117/74 les/c/f=118/75/0 sis=119 pruub=15.536161423s) [1] r=-1 lpr=119 pi=[74,119)/1 crt=51'590 mlcod 0'0 unknown NOTIFY pruub 215.791717529s@ mbc={}] state<Start>: transitioning to Stray
Nov 26 01:19:26 compute-0 python3.9[228002]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:19:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 26 01:19:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 26 01:19:27 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 26 01:19:27 compute-0 ceph-osd[207774]: osd.1 pg_epoch: 120 pg[9.1f( v 51'590 (0'0,51'590] local-lis/les=119/120 n=6 ec=54/45 lis/c=117/74 les/c/f=118/75/0 sis=119) [1] r=0 lpr=119 pi=[74,119)/1 crt=51'590 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 26 01:19:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Nov 26 01:19:27 compute-0 podman[228009]: 2025-11-26 01:19:27.614759451 +0000 UTC m=+0.164065310 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:19:28 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 26 01:19:28 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 26 01:19:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 26 01:19:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:29 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 26 01:19:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 26 01:19:29 compute-0 podman[228271]: 2025-11-26 01:19:29.543634949 +0000 UTC m=+0.103396533 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 01:19:29 compute-0 podman[158021]: time="2025-11-26T01:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:19:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:19:29 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 01:19:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6810 "" "Go-http-client/1.1"
Nov 26 01:19:29 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 01:19:29 compute-0 python3.9[228324]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 01:19:30 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 01:19:30 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 01:19:31 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 26 01:19:31 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 26 01:19:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Nov 26 01:19:31 compute-0 python3.9[228476]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 01:19:31 compute-0 openstack_network_exporter[160178]: ERROR   01:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:19:31 compute-0 openstack_network_exporter[160178]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:19:31 compute-0 openstack_network_exporter[160178]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:19:31 compute-0 openstack_network_exporter[160178]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:19:31 compute-0 openstack_network_exporter[160178]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:19:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 01:19:31 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 01:19:32 compute-0 python3.9[228628]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:19:33 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 01:19:33 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 01:19:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 26 01:19:33 compute-0 python3.9[228780]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 26 01:19:33 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 26 01:19:33 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 26 01:19:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 26 01:19:35 compute-0 python3.9[228932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:19:36 compute-0 python3.9[229084]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:19:37 compute-0 python3.9[229162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:19:37 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 01:19:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 26 01:19:37 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 01:19:37 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 26 01:19:37 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 26 01:19:38 compute-0 python3.9[229314]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:19:38 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 26 01:19:38 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 26 01:19:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 26 01:19:39 compute-0 python3.9[229468]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 01:19:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:19:40
Nov 26 01:19:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:19:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:19:40 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.meta', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 26 01:19:40 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:19:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 26 01:19:41 compute-0 python3.9[229621]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 01:19:41 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 01:19:41 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 01:19:42 compute-0 python3.9[229774]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 01:19:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:43 compute-0 python3.9[229926]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 01:19:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:44 compute-0 python3.9[230078]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:19:44 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 26 01:19:44 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 26 01:19:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:45 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 26 01:19:45 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 26 01:19:45 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts
Nov 26 01:19:45 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok
Nov 26 01:19:46 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 01:19:46 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 01:19:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:47 compute-0 python3.9[230231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:19:47 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 26 01:19:47 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 26 01:19:47 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 01:19:48 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 01:19:48 compute-0 python3.9[230383]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:19:48 compute-0 podman[230434]: 2025-11-26 01:19:48.919078382 +0000 UTC m=+0.112474757 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:19:48 compute-0 podman[230433]: 2025-11-26 01:19:48.940317156 +0000 UTC m=+0.132416705 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 26 01:19:48 compute-0 podman[230435]: 2025-11-26 01:19:48.972566258 +0000 UTC m=+0.148827644 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
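The three health_status=healthy events above are emitted when podman runs each container's configured healthcheck (the 'test' entry in config_data) on its schedule. The same check can be triggered by hand; a sketch assuming the podman CLI is on PATH:

    import subprocess

    # exit code 0 means healthy, mirroring health_status=healthy above
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)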
Nov 26 01:19:48 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 01:19:48 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 01:19:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:49 compute-0 python3.9[230519]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:19:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:49 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Nov 26 01:19:49 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Nov 26 01:19:50 compute-0 python3.9[230678]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:19:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
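The pg target values the autoscaler prints above are consistent with usage_ratio x bias x 300, where 300 is plausibly this cluster's PG budget (e.g. mon_target_pg_per_osd = 100 across 3 OSDs; that factor is inferred from the numbers, not stated in the log), with the result then quantized to a power of two subject to pool minimums. A worked check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    PG_BUDGET = 300  # inferred: the same factor reproduces every line above

    def raw_pg_target(usage_ratio, bias, budget=PG_BUDGET):
        return usage_ratio * bias * budget

    # Pool '.mgr': using 7.185749983720779e-06 of space, bias 1.0
    print(raw_pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, as logged
    # Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07, bias 4.0
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, as logged

The quantization step (0.00061 becoming 16, then compared against the current 32) is omitted here; the sketch only reproduces the raw target arithmetic.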
Nov 26 01:19:50 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 26 01:19:50 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 26 01:19:51 compute-0 python3.9[230756]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:19:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:51 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 26 01:19:51 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 26 01:19:52 compute-0 python3.9[230908]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:19:52 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 01:19:52 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 01:19:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:54 compute-0 podman[231030]: 2025-11-26 01:19:54.607114507 +0000 UTC m=+0.154868644 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, version=9.6)
Nov 26 01:19:54 compute-0 podman[231033]: 2025-11-26 01:19:54.609800142 +0000 UTC m=+0.144208945 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:19:54 compute-0 python3.9[231095]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:19:54 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 26 01:19:54 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 26 01:19:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:55 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 26 01:19:55 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 26 01:19:55 compute-0 python3.9[231255]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 01:19:56 compute-0 python3.9[231405]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
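The stat/slurp pair above just reads /etc/tuned/active_profile to see which tuned profile is in effect before the restart below; the file contains the profile name and nothing else. The same check in a couple of lines:

    from pathlib import Path

    profile = Path("/etc/tuned/active_profile").read_text().strip()
    print(profile)  # whichever profile tuned currently applies on this host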
Nov 26 01:19:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:58 compute-0 podman[231529]: 2025-11-26 01:19:58.40649209 +0000 UTC m=+0.136184200 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 26 01:19:58 compute-0 python3.9[231576]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:19:58 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 01:19:58 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 01:19:58 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 01:19:58 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 01:19:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:19:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 01:19:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:19:59 compute-0 podman[158021]: time="2025-11-26T01:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:19:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:19:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6813 "" "Go-http-client/1.1"
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.777 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.778 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
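The message above (and the '[1] threads' line after it) means every pollster in the source shares a single worker thread, so the polling cycle runs serially and its duration grows with the pollster count. A toy illustration of that executor behavior (not ceilometer code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as ex:  # one worker, as in the log
        list(ex.map(poll, ["cpu", "memory.usage", "disk.device.usage"]))
    print(f"{time.monotonic() - start:.1f}s")      # ~0.3s: three polls, serialized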
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:19:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:19:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
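The lines above are one complete ceilometer polling cycle: for each pollster the agent runs the local_instances discovery, skips the pollster when discovery returns nothing, and logs "Finished processing" for the pollsters that did have resources. A minimal Python sketch of that skip-on-empty-discovery pattern; the names are illustrative, not the actual code in ceilometer/polling/manager.py:

    # Hypothetical sketch of the pattern visible in the DEBUG lines above.
    class Pollster:
        def __init__(self, name):
            self.name = name
        def get_samples(self, resources):
            # The real pollsters query libvirt per instance; this is a stand-in.
            return [f'{self.name}@{r}' for r in resources]

    def run_polling_task(pollsters, discover):
        for p in pollsters:
            resources = discover('local_instances')   # discovery per pollster
            if not resources:
                print(f'Skip pollster {p.name}, no resources found this cycle')
                continue
            for sample in p.get_samples(resources):
                pass  # the real agent hands samples to its publishers here
            print(f'Finished processing pollster [{p.name}].')

    run_polling_task([Pollster('disk.device.capacity')], lambda method: [])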
Nov 26 01:20:00 compute-0 podman[231713]: 2025-11-26 01:20:00.193212302 +0000 UTC m=+0.155743548 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30)
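The kepler health_status=healthy entry above is podman's periodic healthcheck event; the configured test in config_data is '/openstack/healthcheck kepler'. A sketch of triggering the same check by hand and reading the stored result, assuming podman is installed and the container name "kepler" from the entry above:

    # Run the container's configured healthcheck once, then print the recorded state.
    import json, subprocess

    subprocess.run(['podman', 'healthcheck', 'run', 'kepler'], check=False)
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', 'kepler'],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(out.stdout)
    print(health['Status'], health['FailingStreak'])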
Nov 26 01:20:00 compute-0 python3.9[231748]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
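ansible's slurp module, invoked above against /proc/cmdline, simply reads the file and returns its contents base64-encoded so the controller can decode them. Roughly equivalent local Python (slurp's return structure simplified):

    # What ansible-ansible.builtin.slurp does, in miniature.
    import base64

    with open('/proc/cmdline', 'rb') as f:
        content = f.read()
    result = {'source': '/proc/cmdline',
              'content': base64.b64encode(content).decode()}
    print(base64.b64decode(result['content']).decode().strip())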
Nov 26 01:20:00 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 26 01:20:00 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 26 01:20:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:01 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 26 01:20:01 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 26 01:20:01 compute-0 openstack_network_exporter[160178]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:20:01 compute-0 openstack_network_exporter[160178]: ERROR   01:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:20:01 compute-0 openstack_network_exporter[160178]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:20:01 compute-0 openstack_network_exporter[160178]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:20:01 compute-0 openstack_network_exporter[160178]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
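The exporter errors above mean it looked for the OVS/OVN control sockets (the .ctl files that ovs-appctl talks to) and found none, which is expected on a node that runs neither ovn-northd nor a local ovsdb-server. A sketch of the same existence check; the socket directories below are typical defaults, not taken from the exporter's configuration:

    # Check for appctl-style control sockets before attempting a call.
    import glob

    for daemon, pattern in {
        'ovn-northd': '/run/ovn/ovn-northd.*.ctl',
        'ovsdb-server': '/run/openvswitch/ovsdb-server.*.ctl',
    }.items():
        socks = glob.glob(pattern)
        print(daemon, 'control socket found' if socks
              else 'no control socket files found')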
Nov 26 01:20:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 26 01:20:02 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 26 01:20:02 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 26 01:20:02 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 26 01:20:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:03 compute-0 python3.9[231911]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:20:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 26 01:20:03 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 26 01:20:03 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 26 01:20:03 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 26 01:20:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:20:04 compute-0 python3.9[232090]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
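The two ansible systemd invocations above stop and disable ksm.service and ksmtuned.service (kernel samepage merging is commonly switched off on OpenStack compute nodes). Outside ansible, the same effect via systemctl, sketched in Python:

    # Equivalent of ansible.builtin.systemd with state=stopped, enabled=False.
    import subprocess

    for unit in ('ksm.service', 'ksmtuned.service'):
        subprocess.run(['systemctl', 'stop', unit], check=False)
        subprocess.run(['systemctl', 'disable', unit], check=False)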
Nov 26 01:20:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:05 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 26 01:20:05 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 26 01:20:05 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 26 01:20:05 compute-0 systemd[1]: session-41.scope: Consumed 1min 22.346s CPU time.
Nov 26 01:20:05 compute-0 systemd-logind[800]: Session 41 logged out. Waiting for processes to exit.
Nov 26 01:20:05 compute-0 systemd-logind[800]: Removed session 41.
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:20:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0cfa51a0-5f26-4846-aa2e-352763a77836 does not exist
Nov 26 01:20:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev eed3c47b-d24f-4459-87a6-ca8ef80333f1 does not exist
Nov 26 01:20:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3ad4ed88-d345-4e81-8b1e-8f646fa448e7 does not exist
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:20:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
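The handle_command/audit pairs above are the cephadm mgr module driving the monitor command interface ("config generate-minimal-conf", "auth get", "osd tree"). The same interface is reachable from Python through librados; a sketch, assuming python3-rados is installed and /etc/ceph/ceph.conf plus the default keyring grant access:

    # Issue one of the mon commands seen above via librados' mon_command().
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()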
Nov 26 01:20:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:20:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:20:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:20:06 compute-0 podman[232359]: 2025-11-26 01:20:06.833277829 +0000 UTC m=+0.083529948 container create 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:20:06 compute-0 systemd[194522]: Created slice User Background Tasks Slice.
Nov 26 01:20:06 compute-0 systemd[194522]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 01:20:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 26 01:20:06 compute-0 podman[232359]: 2025-11-26 01:20:06.793682691 +0000 UTC m=+0.043934870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:20:06 compute-0 ceph-osd[207774]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 26 01:20:06 compute-0 systemd[194522]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 01:20:06 compute-0 systemd[1]: Started libpod-conmon-5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e.scope.
Nov 26 01:20:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:20:06 compute-0 podman[232359]: 2025-11-26 01:20:06.990374393 +0000 UTC m=+0.240626552 container init 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:20:07 compute-0 podman[232359]: 2025-11-26 01:20:07.010490616 +0000 UTC m=+0.260742735 container start 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:20:07 compute-0 podman[232359]: 2025-11-26 01:20:07.020054304 +0000 UTC m=+0.270306463 container attach 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:20:07 compute-0 kind_chaplygin[232375]: 167 167
Nov 26 01:20:07 compute-0 systemd[1]: libpod-5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e.scope: Deactivated successfully.
Nov 26 01:20:07 compute-0 podman[232359]: 2025-11-26 01:20:07.024790006 +0000 UTC m=+0.275042075 container died 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70bb43e643337ea26e4801f2f0c79d38d0d8e93320d8bb7e0af927135793502-merged.mount: Deactivated successfully.
Nov 26 01:20:07 compute-0 podman[232359]: 2025-11-26 01:20:07.097462459 +0000 UTC m=+0.347714578 container remove 5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:20:07 compute-0 systemd[1]: libpod-conmon-5d475376f7f2d8be08abcfae1ccd37df879953256e1ae122eee33c18844ce04e.scope: Deactivated successfully.
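The create, init, start, attach, died, remove sequence above (container kind_chaplygin, which printed "167 167", the ceph uid/gid) is cephadm's pattern of running one-shot commands in throwaway ceph containers: the whole lifecycle is a single short-lived run. A sketch of the same pattern; the image digest is taken from the log, but the actual command cephadm ran is not recorded here, so the echo is a placeholder:

    # One-shot container run matching the create/start/attach/died/remove events.
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(['podman', 'run', '--rm', image, 'echo', '167 167'],
                         capture_output=True, text=True)
    print(out.stdout.strip())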
Nov 26 01:20:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:07 compute-0 podman[232400]: 2025-11-26 01:20:07.40064755 +0000 UTC m=+0.089556496 container create f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:20:07 compute-0 podman[232400]: 2025-11-26 01:20:07.354954612 +0000 UTC m=+0.043863638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:20:07 compute-0 systemd[1]: Started libpod-conmon-f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f.scope.
Nov 26 01:20:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:07 compute-0 podman[232400]: 2025-11-26 01:20:07.525015109 +0000 UTC m=+0.213924115 container init f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:20:07 compute-0 podman[232400]: 2025-11-26 01:20:07.55721251 +0000 UTC m=+0.246121486 container start f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:20:07 compute-0 podman[232400]: 2025-11-26 01:20:07.563913227 +0000 UTC m=+0.252822263 container attach f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:20:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 26 01:20:07 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 26 01:20:08 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 26 01:20:08 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 26 01:20:08 compute-0 sad_taussig[232414]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:20:08 compute-0 sad_taussig[232414]: --> relative data size: 1.0
Nov 26 01:20:08 compute-0 sad_taussig[232414]: --> All data devices are unavailable
Nov 26 01:20:08 compute-0 systemd[1]: libpod-f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f.scope: Deactivated successfully.
Nov 26 01:20:08 compute-0 systemd[1]: libpod-f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f.scope: Consumed 1.254s CPU time.
Nov 26 01:20:08 compute-0 podman[232400]: 2025-11-26 01:20:08.865917549 +0000 UTC m=+1.554826525 container died f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-09689601214a923cb8e1370094b2c103ebab355abb754e666967ce405f4e64be-merged.mount: Deactivated successfully.
Nov 26 01:20:08 compute-0 podman[232400]: 2025-11-26 01:20:08.965216147 +0000 UTC m=+1.654125093 container remove f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:20:08 compute-0 systemd[1]: libpod-conmon-f62d1bb41d18ca9d0e8bba1db6dcb3f370d8ad67fdfd19035eb8552952a0832f.scope: Deactivated successfully.
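sad_taussig's output above ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") is ceph-volume's batch report: the three LVM data devices offered by the drive group are already consumed by existing OSDs, so there is nothing new to create. One way to reproduce such a report by hand; --report keeps the run read-only, and the third LV path is assumed from the ceph_vgN/ceph_lvN naming pattern since only two appear in the log:

    # Ask ceph-volume which of the offered LVs it would use, without changing anything.
    import subprocess

    subprocess.run(
        ['ceph-volume', 'lvm', 'batch', '--report',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2'],   # third LV assumed, not shown in this log
        check=False)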
Nov 26 01:20:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:20:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.095041862 +0000 UTC m=+0.084470364 container create 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:20:10 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.063703676 +0000 UTC m=+0.053132208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:20:10 compute-0 ceph-osd[208794]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 26 01:20:10 compute-0 systemd[1]: Started libpod-conmon-2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc.scope.
Nov 26 01:20:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.268278489 +0000 UTC m=+0.257707021 container init 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.284675797 +0000 UTC m=+0.274104289 container start 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.290781088 +0000 UTC m=+0.280209640 container attach 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:20:10 compute-0 mystifying_carson[232609]: 167 167
Nov 26 01:20:10 compute-0 systemd[1]: libpod-2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc.scope: Deactivated successfully.
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.295918562 +0000 UTC m=+0.285347064 container died 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-24bc70adb070464890bc948d020f9499ed2d27b7b3f761f99ebaf6ff8a59dc7e-merged.mount: Deactivated successfully.
Nov 26 01:20:10 compute-0 podman[232593]: 2025-11-26 01:20:10.385902469 +0000 UTC m=+0.375330961 container remove 2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:20:10 compute-0 systemd[1]: libpod-conmon-2e49ff854ad67222ac5c819b991c71123e22e3c5063cc3fe29ee3dcb34248ddc.scope: Deactivated successfully.
Nov 26 01:20:10 compute-0 podman[232634]: 2025-11-26 01:20:10.637911929 +0000 UTC m=+0.081653425 container create 742be9f20c202819bfc8b59e62d38da5315914a32227f5cc1262400475b9685a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:20:10 compute-0 podman[232634]: 2025-11-26 01:20:10.609590516 +0000 UTC m=+0.053332062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:20:10 compute-0 systemd[1]: Started libpod-conmon-742be9f20c202819bfc8b59e62d38da5315914a32227f5cc1262400475b9685a.scope.
Nov 26 01:20:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e725f8c7b45c9979e2c3d219a3e6ff4f934294731cef60dca46ccc8ae6997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e725f8c7b45c9979e2c3d219a3e6ff4f934294731cef60dca46ccc8ae6997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e725f8c7b45c9979e2c3d219a3e6ff4f934294731cef60dca46ccc8ae6997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e725f8c7b45c9979e2c3d219a3e6ff4f934294731cef60dca46ccc8ae6997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:20:10 compute-0 podman[232634]: 2025-11-26 01:20:10.813772798 +0000 UTC m=+0.257514344 container init 742be9f20c202819bfc8b59e62d38da5315914a32227f5cc1262400475b9685a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:20:10 compute-0 podman[232634]: 2025-11-26 01:20:10.832736219 +0000 UTC m=+0.276477715 container start 742be9f20c202819bfc8b59e62d38da5315914a32227f5cc1262400475b9685a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:20:10 compute-0 podman[232634]: 2025-11-26 01:20:10.837626076 +0000 UTC m=+0.281367662 container attach 742be9f20c202819bfc8b59e62d38da5315914a32227f5cc1262400475b9685a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:20:10 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 26 01:20:10 compute-0 ceph-osd[206645]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:20:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:20:11 compute-0 systemd-logind[800]: New session 42 of user zuul.
Nov 26 01:20:11 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]: {
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:    "0": [
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:        {
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "devices": [
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "/dev/loop3"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            ],
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_name": "ceph_lv0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_size": "21470642176",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "name": "ceph_lv0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "tags": {
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cluster_name": "ceph",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.crush_device_class": "",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.encrypted": "0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osd_id": "0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.type": "block",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.vdo": "0"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            },
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "type": "block",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "vg_name": "ceph_vg0"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:        }
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:    ],
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:    "1": [
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:        {
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "devices": [
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "/dev/loop4"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            ],
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_name": "ceph_lv1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_size": "21470642176",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "name": "ceph_lv1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "tags": {
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.cluster_name": "ceph",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.crush_device_class": "",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.encrypted": "0",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osd_id": "1",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.type": "block",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:                "ceph.vdo": "0"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            },
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "type": "block",
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:            "vg_name": "ceph_vg1"
Nov 26 01:20:11 compute-0 vigorous_nightingale[232648]:        }
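The JSON that vigorous_nightingale is printing is ceph-volume lvm list --format json style output: a map of OSD id to its logical volumes and their ceph.* tags. The dump stops mid-object here; rsyslogd reports just below that messages were dropped by rate-limiting. A sketch of extracting the OSD-to-device mapping from such output, using an abbreviated literal based on the entries above:

    # Map OSD id -> (lv_path, backing device) from ceph-volume lvm list JSON.
    import json

    raw = '''{"0": [{"devices": ["/dev/loop3"],
                     "lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_id": "0", "ceph.type": "block"}}],
              "1": [{"devices": ["/dev/loop4"],
                     "lv_path": "/dev/ceph_vg1/ceph_lv1",
                     "tags": {"ceph.osd_id": "1", "ceph.type": "block"}}]}'''
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")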
Nov 26 01:22:25 compute-0 python3.9[245337]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:26 compute-0 rsyslogd[188548]: imjournal: 1622 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 26 01:22:26 compute-0 podman[245455]: 2025-11-26 01:22:26.235736068 +0000 UTC m=+0.106424861 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:22:26 compute-0 podman[245443]: 2025-11-26 01:22:26.250162095 +0000 UTC m=+0.112083451 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41)
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0f16869f-b7bb-4f35-ad17-02d361123543 does not exist
Nov 26 01:22:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bb0778ec-5ef8-4201-823b-3bffecfd272c does not exist
Nov 26 01:22:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 123bc96e-12ec-44b1-b8ec-07a12b751c12 does not exist
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:22:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:22:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:22:26 compute-0 python3.9[245647]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:22:26 compute-0 systemd[1]: Reloading.
Nov 26 01:22:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:22:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:22:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:22:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:22:27 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 01:22:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 01:22:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 01:22:27 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.578148943 +0000 UTC m=+0.109110237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.692197247 +0000 UTC m=+0.223158491 container create 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:22:28 compute-0 systemd[1]: Started libpod-conmon-6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b.scope.
Nov 26 01:22:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:28 compute-0 python3.9[246002]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.874593269 +0000 UTC m=+0.405554513 container init 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.896092395 +0000 UTC m=+0.427053639 container start 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:22:28 compute-0 loving_northcutt[246007]: 167 167
Nov 26 01:22:28 compute-0 systemd[1]: libpod-6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b.scope: Deactivated successfully.
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.936791802 +0000 UTC m=+0.467753016 container attach 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:22:28 compute-0 podman[245958]: 2025-11-26 01:22:28.937342327 +0000 UTC m=+0.468303571 container died 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:22:28 compute-0 network[246037]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:22:28 compute-0 network[246038]: 'network-scripts' will be removed from distribution in near future.
Nov 26 01:22:28 compute-0 network[246039]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:22:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:29 compute-0 podman[158021]: time="2025-11-26T01:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b71bf245798ccf2be1e76b19095cefc1f3758c272aa5885eb6aaa584974bc4ec-merged.mount: Deactivated successfully.
Nov 26 01:22:30 compute-0 podman[245958]: 2025-11-26 01:22:30.007204234 +0000 UTC m=+1.538165438 container remove 6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:22:30 compute-0 podman[158021]: @ - - [26/Nov/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34190 "" "Go-http-client/1.1"
Nov 26 01:22:30 compute-0 systemd[1]: libpod-conmon-6cfe4fc37bb768ac5a48f05b056e64cf72c32e03b69833ad8e42823b54a1049b.scope: Deactivated successfully.
Nov 26 01:22:30 compute-0 podman[158021]: @ - - [26/Nov/2025:01:22:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6822 "" "Go-http-client/1.1"
Nov 26 01:22:30 compute-0 podman[246062]: 2025-11-26 01:22:30.24938527 +0000 UTC m=+0.077657090 container create 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:22:30 compute-0 podman[246062]: 2025-11-26 01:22:30.207678965 +0000 UTC m=+0.035950835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:30 compute-0 systemd[1]: Started libpod-conmon-04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86.scope.
Nov 26 01:22:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:30 compute-0 podman[246062]: 2025-11-26 01:22:30.436761352 +0000 UTC m=+0.265033242 container init 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:22:30 compute-0 podman[246062]: 2025-11-26 01:22:30.456595991 +0000 UTC m=+0.284867791 container start 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:22:30 compute-0 podman[246062]: 2025-11-26 01:22:30.463126695 +0000 UTC m=+0.291398575 container attach 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:22:30 compute-0 podman[246096]: 2025-11-26 01:22:30.73292861 +0000 UTC m=+0.117229446 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 01:22:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: ERROR   01:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:22:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:22:31 compute-0 confident_sinoussi[246084]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:22:31 compute-0 confident_sinoussi[246084]: --> relative data size: 1.0
Nov 26 01:22:31 compute-0 confident_sinoussi[246084]: --> All data devices are unavailable
Nov 26 01:22:31 compute-0 podman[246062]: 2025-11-26 01:22:31.689318198 +0000 UTC m=+1.517589998 container died 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:22:31 compute-0 systemd[1]: libpod-04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86.scope: Deactivated successfully.
Nov 26 01:22:31 compute-0 systemd[1]: libpod-04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86.scope: Consumed 1.149s CPU time.
Nov 26 01:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-caf0ec94264412a52c2d86e48c9684d92475c26b46422364aa4da4101f0ebb16-merged.mount: Deactivated successfully.
Nov 26 01:22:31 compute-0 podman[246062]: 2025-11-26 01:22:31.782181164 +0000 UTC m=+1.610452954 container remove 04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 01:22:31 compute-0 systemd[1]: libpod-conmon-04e548beadbdda043263397f9fdb683bca0f2383ee82d1434dbaa9a9e24d7c86.scope: Deactivated successfully.
Nov 26 01:22:32 compute-0 podman[246324]: 2025-11-26 01:22:32.742689788 +0000 UTC m=+0.115247409 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=)
Nov 26 01:22:32 compute-0 podman[246372]: 2025-11-26 01:22:32.925202263 +0000 UTC m=+0.081204550 container create 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:22:32 compute-0 podman[246372]: 2025-11-26 01:22:32.892614544 +0000 UTC m=+0.048616891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:32 compute-0 systemd[1]: Started libpod-conmon-22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354.scope.
Nov 26 01:22:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:33 compute-0 podman[246372]: 2025-11-26 01:22:33.068682537 +0000 UTC m=+0.224684854 container init 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:22:33 compute-0 podman[246372]: 2025-11-26 01:22:33.081182269 +0000 UTC m=+0.237184546 container start 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:22:33 compute-0 podman[246372]: 2025-11-26 01:22:33.088488145 +0000 UTC m=+0.244490432 container attach 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:22:33 compute-0 busy_shockley[246391]: 167 167
Nov 26 01:22:33 compute-0 systemd[1]: libpod-22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354.scope: Deactivated successfully.
Nov 26 01:22:33 compute-0 podman[246372]: 2025-11-26 01:22:33.095210775 +0000 UTC m=+0.251213062 container died 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5021b3081fa7107a3548e98c168a30d3277d15be9c4e95e0d977b0ba706bf2-merged.mount: Deactivated successfully.
Nov 26 01:22:33 compute-0 podman[246372]: 2025-11-26 01:22:33.18620775 +0000 UTC m=+0.342210007 container remove 22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:22:33 compute-0 systemd[1]: libpod-conmon-22bb1b83dd7905b0453b98ab67d02bd581da882f570d992456c7d6c5ab530354.scope: Deactivated successfully.
Nov 26 01:22:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:33 compute-0 podman[246428]: 2025-11-26 01:22:33.44697992 +0000 UTC m=+0.087476677 container create 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:22:33 compute-0 podman[246428]: 2025-11-26 01:22:33.413377803 +0000 UTC m=+0.053874570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:33 compute-0 systemd[1]: Started libpod-conmon-48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47.scope.
Nov 26 01:22:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e392b3c828afd8f4f8b3ca3a3d8358520be9e2537cab04b72a6fd24025a1b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e392b3c828afd8f4f8b3ca3a3d8358520be9e2537cab04b72a6fd24025a1b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e392b3c828afd8f4f8b3ca3a3d8358520be9e2537cab04b72a6fd24025a1b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e392b3c828afd8f4f8b3ca3a3d8358520be9e2537cab04b72a6fd24025a1b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:33 compute-0 podman[246428]: 2025-11-26 01:22:33.646165665 +0000 UTC m=+0.286662482 container init 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:22:33 compute-0 podman[246428]: 2025-11-26 01:22:33.66230907 +0000 UTC m=+0.302805837 container start 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:22:33 compute-0 podman[246428]: 2025-11-26 01:22:33.668495574 +0000 UTC m=+0.308992331 container attach 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:22:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:34 compute-0 strange_rosalind[246450]: {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    "0": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "devices": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "/dev/loop3"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            ],
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_name": "ceph_lv0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_size": "21470642176",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "name": "ceph_lv0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "tags": {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_name": "ceph",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.crush_device_class": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.encrypted": "0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_id": "0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.vdo": "0"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            },
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "vg_name": "ceph_vg0"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        }
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    ],
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    "1": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "devices": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "/dev/loop4"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            ],
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_name": "ceph_lv1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_size": "21470642176",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "name": "ceph_lv1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "tags": {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_name": "ceph",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.crush_device_class": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.encrypted": "0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_id": "1",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.vdo": "0"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            },
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "vg_name": "ceph_vg1"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        }
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    ],
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    "2": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "devices": [
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "/dev/loop5"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            ],
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_name": "ceph_lv2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_size": "21470642176",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "name": "ceph_lv2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "tags": {
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.cluster_name": "ceph",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.crush_device_class": "",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.encrypted": "0",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osd_id": "2",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:                "ceph.vdo": "0"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            },
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "type": "block",
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:            "vg_name": "ceph_vg2"
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:        }
Nov 26 01:22:34 compute-0 strange_rosalind[246450]:    ]
Nov 26 01:22:34 compute-0 strange_rosalind[246450]: }
Nov 26 01:22:34 compute-0 systemd[1]: libpod-48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47.scope: Deactivated successfully.
Nov 26 01:22:34 compute-0 podman[246428]: 2025-11-26 01:22:34.52369614 +0000 UTC m=+1.164192897 container died 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e392b3c828afd8f4f8b3ca3a3d8358520be9e2537cab04b72a6fd24025a1b64-merged.mount: Deactivated successfully.
Nov 26 01:22:34 compute-0 podman[246428]: 2025-11-26 01:22:34.643869327 +0000 UTC m=+1.284366064 container remove 48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:22:34 compute-0 systemd[1]: libpod-conmon-48f9e57a9d55773186956043b4a620ab46f60a711ad65e1da87fdda63acf3a47.scope: Deactivated successfully.
Nov 26 01:22:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:35 compute-0 python3.9[246725]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.798551713 +0000 UTC m=+0.079307016 container create 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.772367415 +0000 UTC m=+0.053122708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:35 compute-0 systemd[1]: Started libpod-conmon-7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141.scope.
Nov 26 01:22:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.947505482 +0000 UTC m=+0.228260825 container init 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.965977583 +0000 UTC m=+0.246732886 container start 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.972438705 +0000 UTC m=+0.253194048 container attach 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:22:35 compute-0 sad_jones[246857]: 167 167
Nov 26 01:22:35 compute-0 systemd[1]: libpod-7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141.scope: Deactivated successfully.
Nov 26 01:22:35 compute-0 podman[246807]: 2025-11-26 01:22:35.979068472 +0000 UTC m=+0.259823775 container died 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-99278ee1bf4682f3b16263f1ee2c7791d6aa47e053264777064dece508bab44b-merged.mount: Deactivated successfully.
Nov 26 01:22:36 compute-0 podman[246807]: 2025-11-26 01:22:36.049888518 +0000 UTC m=+0.330643801 container remove 7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:22:36 compute-0 systemd[1]: libpod-conmon-7a95f367c1620ceaa71e6639559635add67f3c154c06761b22b99fbb36a87141.scope: Deactivated successfully.
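Note: the create/start/attach/died/remove burst above is a throwaway container: it runs a single command, prints "167 167" (which matches the uid/gid of the ceph user inside Ceph images) and exits. A minimal sketch of that kind of probe, assuming cephadm-style `podman run --rm` usage; the exact command cephadm runs is an assumption here:

```python
# Sketch (assumption): reproduce the kind of short-lived "podman run" probe
# that produces a create/start/attach/died/remove burst like the one above.
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

def probe_ceph_ids(image: str = IMAGE) -> tuple[int, int]:
    """Run a throwaway container that prints '<uid> <gid>' and exits."""
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return int(out[0]), int(out[1])

if __name__ == "__main__":
    uid, gid = probe_ceph_ids()
    print(uid, gid)  # expected "167 167", as logged by sad_jones above
```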
Nov 26 01:22:36 compute-0 python3.9[246861]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:36 compute-0 podman[246891]: 2025-11-26 01:22:36.306636925 +0000 UTC m=+0.065882458 container create 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:22:36 compute-0 podman[246891]: 2025-11-26 01:22:36.277610867 +0000 UTC m=+0.036856400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:22:36 compute-0 systemd[1]: Started libpod-conmon-8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351.scope.
Nov 26 01:22:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b6a334e0293fd9a4232aa09e3534dcb952792c51ffb7fe3379e92bc6f9c45c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b6a334e0293fd9a4232aa09e3534dcb952792c51ffb7fe3379e92bc6f9c45c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b6a334e0293fd9a4232aa09e3534dcb952792c51ffb7fe3379e92bc6f9c45c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b6a334e0293fd9a4232aa09e3534dcb952792c51ffb7fe3379e92bc6f9c45c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
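Note: these kernel warnings mean the XFS filesystem backing the overlay mounts was created without big timestamps, so inode timestamps cap at 0x7fffffff (19 Jan 2038). A quick check, assuming `xfs_info` is available on the host:

```python
# Sketch: check whether an XFS filesystem was created with bigtime=1;
# without it, timestamps cap at 0x7fffffff, which is exactly what the
# kernel messages above are warning about.
import subprocess

def xfs_has_bigtime(mountpoint: str) -> bool:
    info = subprocess.run(["xfs_info", mountpoint],
                          capture_output=True, text=True, check=True).stdout
    return "bigtime=1" in info

print(xfs_has_bigtime("/var/lib/containers"))  # path assumed from the log
```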
Nov 26 01:22:36 compute-0 podman[246891]: 2025-11-26 01:22:36.464975328 +0000 UTC m=+0.224220901 container init 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:22:36 compute-0 podman[246891]: 2025-11-26 01:22:36.481343889 +0000 UTC m=+0.240589422 container start 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:22:36 compute-0 podman[246891]: 2025-11-26 01:22:36.487458502 +0000 UTC m=+0.246704075 container attach 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:22:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:37 compute-0 python3.9[247056]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:37 compute-0 reverent_cerf[246924]: {
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_id": 0,
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "type": "bluestore"
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    },
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_id": 2,
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "type": "bluestore"
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    },
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_id": 1,
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:        "type": "bluestore"
Nov 26 01:22:37 compute-0 reverent_cerf[246924]:    }
Nov 26 01:22:37 compute-0 reverent_cerf[246924]: }
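Note: the JSON printed by reverent_cerf is a per-OSD inventory keyed by osd_uuid; the shape matches what ceph-volume's raw listing emits. A sketch that reproduces and parses such a listing (the image tag and bind mounts are assumptions; the log pins the image by digest):

```python
# Sketch: obtain and parse an OSD inventory like the JSON logged above.
# "ceph-volume raw list" prints JSON; running it in a container the way
# cephadm does is assumed here.
import json
import subprocess

def list_bluestore_osds() -> dict[int, str]:
    """Map osd_id -> backing device for every OSD reported on this host."""
    raw = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "-v", "/dev:/dev", "-v", "/var/run/udev:/var/run/udev",
         "quay.io/ceph/ceph:reef",  # hypothetical tag; see digest above
         "ceph-volume", "raw", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {osd["osd_id"]: osd["device"] for osd in json.loads(raw).values()}

# With the output logged above this yields:
# {0: '/dev/mapper/ceph_vg0-ceph_lv0', 2: '/dev/mapper/ceph_vg2-ceph_lv2',
#  1: '/dev/mapper/ceph_vg1-ceph_lv1'}
```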
Nov 26 01:22:37 compute-0 systemd[1]: libpod-8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351.scope: Deactivated successfully.
Nov 26 01:22:37 compute-0 podman[246891]: 2025-11-26 01:22:37.707927833 +0000 UTC m=+1.467173366 container died 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:22:37 compute-0 systemd[1]: libpod-8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351.scope: Consumed 1.214s CPU time.
Nov 26 01:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b6a334e0293fd9a4232aa09e3534dcb952792c51ffb7fe3379e92bc6f9c45c-merged.mount: Deactivated successfully.
Nov 26 01:22:37 compute-0 podman[246891]: 2025-11-26 01:22:37.806676346 +0000 UTC m=+1.565921849 container remove 8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:22:37 compute-0 systemd[1]: libpod-conmon-8b302b57411eb826ba33f577cb1aacdcee02b140f998762a9e575456946fd351.scope: Deactivated successfully.
Nov 26 01:22:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:22:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:22:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8d1b5df5-7bd5-4924-8cf0-add72d092253 does not exist
Nov 26 01:22:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9ab254ea-d656-499b-9cef-8f0971e1ddf3 does not exist
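Note: the two mon_command lines show the cephadm mgr module persisting the refreshed host and device inventory into the monitor config-key store. Reading one of those keys back, as a sketch:

```python
# Sketch: read back the inventory blob that the cephadm mgr module just
# stored via the "config-key set" commands logged above.
import json
import subprocess

def get_host_devices(host: str = "compute-0") -> dict:
    key = f"mgr/cephadm/host.{host}.devices.0"  # key taken from the log
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    return json.loads(blob)
```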
Nov 26 01:22:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:22:38 compute-0 python3.9[247297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:39 compute-0 python3.9[247375]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:40 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 26 01:22:40 compute-0 systemd[1]: session-24.scope: Consumed 2min 51.912s CPU time.
Nov 26 01:22:40 compute-0 systemd-logind[800]: Session 24 logged out. Waiting for processes to exit.
Nov 26 01:22:40 compute-0 systemd-logind[800]: Removed session 24.
Nov 26 01:22:40 compute-0 python3.9[247527]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 01:22:40 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 01:22:40 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 01:22:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:22:40
Nov 26 01:22:40 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:22:40 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:22:40 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups']
Nov 26 01:22:40 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:22:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:42 compute-0 python3.9[247683]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:43 compute-0 python3.9[247835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:44 compute-0 python3.9[247913]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:45 compute-0 python3.9[248065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:45 compute-0 python3.9[248143]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yhbbb2go recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:46 compute-0 python3.9[248295]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:47 compute-0 python3.9[248373]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:48 compute-0 python3.9[248525]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:22:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:50 compute-0 python3[248679]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:22:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
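Note: each pair of autoscaler lines computes a pool's share of the 64411926528-byte budget, applies the pool bias, and quantizes the resulting PG target to a power of two; a pool is only resized when the ideal count is far from the current one, which is why cephfs.cephfs.meta stays at 32 despite a quantized target of 16. A simplified sketch of those two rules (the 3x threshold is the upstream default, treated as an assumption here):

```python
# Sketch (simplified): the two rules visible in the pg_autoscaler lines
# above. The real module also honors target_ratio, min/max PG limits, etc.
def quantize_pow2(n: float) -> int:
    """Largest power of two not exceeding n, never below 1."""
    p = 1
    while p * 2 <= max(n, 1):
        p *= 2
    return p

def should_adjust(current: int, ideal: int, factor: float = 3.0) -> bool:
    """Only act when ideal and current differ by more than `factor`."""
    return ideal > current * factor or ideal < current / factor

# cephfs.cephfs.meta above: quantized target 16, current 32 -> no change,
# because 16 is within a factor of 3 of 32.
print(quantize_pow2(20), should_adjust(32, 16))  # -> 16 False
```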
Nov 26 01:22:51 compute-0 python3.9[248831]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:51 compute-0 podman[248910]: 2025-11-26 01:22:51.782321414 +0000 UTC m=+0.115558468 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:22:51 compute-0 podman[248909]: 2025-11-26 01:22:51.804183611 +0000 UTC m=+0.141610363 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm)
Nov 26 01:22:51 compute-0 podman[248911]: 2025-11-26 01:22:51.823582858 +0000 UTC m=+0.152080078 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
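Note: the health_status=healthy events above are periodic healthcheck runs against the 'test' command in each container's config_data. The same probe can be triggered by hand; a sketch:

```python
# Sketch: trigger the same probe the periodic health_status events above
# come from ("podman healthcheck run" exits 0 when the check passes).
import subprocess

def healthy(container: str) -> bool:
    return subprocess.run(
        ["podman", "healthcheck", "run", container]).returncode == 0

print(healthy("ovn_controller"))  # container name taken from the log
```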
Nov 26 01:22:51 compute-0 python3.9[248912]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:53 compute-0 python3.9[249124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:53 compute-0 python3.9[249202]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:54 compute-0 python3.9[249354]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:55 compute-0 python3.9[249432]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:56 compute-0 podman[249556]: 2025-11-26 01:22:56.481121629 +0000 UTC m=+0.093374783 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Nov 26 01:22:56 compute-0 podman[249557]: 2025-11-26 01:22:56.501199095 +0000 UTC m=+0.098664712 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:22:56 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:22:56 compute-0 python3.9[249623]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:57 compute-0 python3.9[249702]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:58 compute-0 python3.9[249854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:22:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:22:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:22:59 compute-0 python3.9[249932]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:22:59 compute-0 podman[158021]: time="2025-11-26T01:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:22:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:22:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6826 "" "Go-http-client/1.1"
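Note: these access-log lines are the podman system service answering libpod REST calls; the podman_exporter configured above uses CONTAINER_HOST=unix:///run/podman/podman.sock. A sketch of the same containers/json query over that socket:

```python
# Sketch: issue the libpod query seen in the access-log lines above over
# the podman API socket, using only the standard library.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over a UNIX domain socket."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```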
Nov 26 01:23:00 compute-0 python3.9[250084]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:23:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:01 compute-0 openstack_network_exporter[160178]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:23:01 compute-0 openstack_network_exporter[160178]: ERROR   01:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:23:01 compute-0 openstack_network_exporter[160178]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:23:01 compute-0 openstack_network_exporter[160178]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:23:01 compute-0 openstack_network_exporter[160178]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
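Note: these exporter errors are expected noise on a compute node: the exporter locates daemons through their *.ctl control sockets, and the daemons it probes here (ovn-northd, the ovsdb-server instance it wants) either are not running on this host or expose their sockets elsewhere. A sketch of the same socket discovery (the runtime directories are assumptions for this host):

```python
# Sketch: discover which OVS/OVN daemons publish a control socket, the
# mechanism behind the "no control socket files found" errors above.
from glob import glob

for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    print(pattern, "->", glob(pattern))
```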
Nov 26 01:23:01 compute-0 podman[250211]: 2025-11-26 01:23:01.613287778 +0000 UTC m=+0.162632405 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 01:23:01 compute-0 python3.9[250258]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
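Note: the firewall tasks above render five rule fragments, dry-run their concatenation with `nft -c -f -` (the ansible.legacy.command call), and only then anchor the include block in /etc/sysconfig/nftables.conf so nftables.service loads it at boot. A sketch of that validate-before-install step:

```python
# Sketch of the validate-then-install flow in the tasks above: concatenate
# the rendered rule files in the same order as the logged command and ask
# nft for a check-only parse before anything is persisted.
import subprocess
from pathlib import Path

PARTS = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
         "edpm-update-jumps.nft", "edpm-jumps.nft"]

def validate(ruleset_dir: str = "/etc/nftables") -> None:
    blob = "".join((Path(ruleset_dir) / p).read_text() for p in PARTS)
    # "-c" parses and checks without applying; raises on a syntax/ref error
    subprocess.run(["nft", "-c", "-f", "-"], input=blob,
                   text=True, check=True)
```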
Nov 26 01:23:02 compute-0 python3.9[250410]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:23:03 compute-0 podman[250411]: 2025-11-26 01:23:03.11617553 +0000 UTC m=+0.150822002 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, config_id=edpm, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=)
Nov 26 01:23:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:04 compute-0 python3.9[250581]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:23:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:05 compute-0 python3.9[250733]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 01:23:06 compute-0 python3.9[250885]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
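Note: the two ansible.posix.mount calls set up dedicated hugetlbfs mounts for 1G and 2M pages on the directories created just before (mode 0775, zuul:hugetlbfs). The equivalent direct mounts, as a sketch; the fstab lines the module persists are shown as comments:

```python
# Sketch: the mounts the two ansible.posix.mount tasks above establish.
# Persisted fstab entries (boot=True) would look like:
#   none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
#   none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0
import subprocess

for path, size in (("/dev/hugepages1G", "1G"), ("/dev/hugepages2M", "2M")):
    subprocess.run(["mount", "-t", "hugetlbfs", "-o", f"pagesize={size}",
                    "none", path], check=True)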
Nov 26 01:23:06 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 26 01:23:06 compute-0 systemd[1]: session-46.scope: Consumed 51.070s CPU time.
Nov 26 01:23:06 compute-0 systemd-logind[800]: Session 46 logged out. Waiting for processes to exit.
Nov 26 01:23:06 compute-0 systemd-logind[800]: Removed session 46.
Nov 26 01:23:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:10 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:12 compute-0 systemd-logind[800]: New session 47 of user zuul.
Nov 26 01:23:12 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 26 01:23:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:14 compute-0 python3.9[251067]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 01:23:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:15 compute-0 python3.9[251219]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:23:16 compute-0 python3.9[251373]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 26 01:23:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:17 compute-0 python3.9[251525]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.0emg4g7w follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:23:18 compute-0 python3.9[251650]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.0emg4g7w mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120196.9294086-44-113033212376473/.source.0emg4g7w _original_basename=.yejn5_j2 follow=False checksum=31a3ffa10d08996ca1de56ec16d24731abb70af7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:23:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:20 compute-0 python3.9[251806]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:23:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:21 compute-0 python3.9[251958]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCykmnY+oafG3mHme/LpEkb2adSDQrzMN3MimIJb6cb9uyFPPekXIkxuzLR2hnrvQYJh8FRip2XXTA7OK9VGOt/2ffm5oV/vtTcglUGBGV2I6g6oMNtUbnvnulNj76pFz/cfKe0hQkAGM+b2aadpjm9DG0vOtuULnGPYiexfSN6uH58xfd6fWWwXjl3fLfUAdeMMfIXKn8+yO/MWeiP0OXqDBlmxsSq2awwlyW9zXr3UKOEVNzRm1HWuDoC92FALJq2LRIlgRWL62xsOSzlx2yESDY5d5NMP8+T5pbIRZls9qv5+Ngd2uM4RwQeE8HfNRAn9pBMJH1w0wa4/SkUv7v+88rm9mUzO9qsWn4KxM3S4ZJ9OGdX6YIRZ1gi4mMR9avWqoJHvs60HyrpKTvZHZrgOLXzXP+Dt35H271u/euxUPrrrRKH77hRA+rUnFkO1gpJFKKdp+VODlgXMotBRQOtwFhOf5UfJivpSu1UeS3WlZKkmCVCnf3KFdlEkcKNNjU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4JumxWKxmoxGnJJmVBjitKlLFgQ6W4f029bTfAiSDd#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLJ2eh4CQVE9/EuBwJMMRg0Myb0WN6nOq5cVeYrcwl3vKUnKN3kWqlDkumr3pQyW/7ceK7qycJrI9T1pQjoOj2A=#012 create=True mode=0644 path=/tmp/ansible.0emg4g7w state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:23:22 compute-0 podman[252083]: 2025-11-26 01:23:22.455791787 +0000 UTC m=+0.103787213 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:23:22 compute-0 podman[252082]: 2025-11-26 01:23:22.45945566 +0000 UTC m=+0.100872660 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:23:22 compute-0 podman[252084]: 2025-11-26 01:23:22.503267538 +0000 UTC m=+0.137634749 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 26 01:23:22 compute-0 python3.9[252166]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.0emg4g7w' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:23:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:23 compute-0 python3.9[252333]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.0emg4g7w state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
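
The three tasks above form one unit: blockinfile renders the managed host-key block into /tmp/ansible.0emg4g7w, the shell task copies it over /etc/ssh/ssh_known_hosts, and the file task deletes the temp file. A minimal Python sketch of the same write-then-swap pattern, assuming root and reusing one ed25519 entry from the log (the helper name is ours):

    import os
    import shutil
    import tempfile

    def install_known_hosts(entries, dest="/etc/ssh/ssh_known_hosts"):
        """Render host-key lines between ansible-style markers, then swap into place."""
        with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
            tmp.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
            tmp.write("\n".join(entries) + "\n")
            tmp.write("# END ANSIBLE MANAGED BLOCK\n")
            path = tmp.name
        shutil.copyfile(path, dest)  # equivalent of: cat tmpfile > /etc/ssh/ssh_known_hosts
        os.unlink(path)              # equivalent of: ansible.builtin.file state=absent

    install_known_hosts([
        "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 "
        "AAAAC3NzaC1lZDI1NTE5AAAAIE4JumxWKxmoxGnJJmVBjitKlLFgQ6W4f029bTfAiSDd",
    ])
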
Nov 26 01:23:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:24 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 26 01:23:24 compute-0 systemd[1]: session-47.scope: Consumed 8.485s CPU time.
Nov 26 01:23:24 compute-0 systemd-logind[800]: Session 47 logged out. Waiting for processes to exit.
Nov 26 01:23:24 compute-0 systemd-logind[800]: Removed session 47.
Nov 26 01:23:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:27 compute-0 podman[252359]: 2025-11-26 01:23:27.539045138 +0000 UTC m=+0.093455921 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:23:27 compute-0 podman[252358]: 2025-11-26 01:23:27.548931967 +0000 UTC m=+0.105857181 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:23:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:29 compute-0 systemd-logind[800]: New session 48 of user zuul.
Nov 26 01:23:29 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 26 01:23:29 compute-0 podman[158021]: time="2025-11-26T01:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:23:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:23:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6823 "" "Go-http-client/1.1"
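
The podman[158021] lines are the Podman service API answering a scrape by the podman_exporter container (which mounts /run/podman/podman.sock per its config_data at 01:23:22). The same libpod endpoint can be queried over the unix socket with only the stdlib; the UnixHTTPConnection helper below is ours, while the endpoint path is taken verbatim from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a unix socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c["Names"], c["State"])
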
Nov 26 01:23:31 compute-0 python3.9[252552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:23:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:31 compute-0 openstack_network_exporter[160178]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:23:31 compute-0 openstack_network_exporter[160178]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:23:31 compute-0 openstack_network_exporter[160178]: ERROR   01:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:23:31 compute-0 openstack_network_exporter[160178]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:23:31 compute-0 openstack_network_exporter[160178]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
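
These errors repeat on every scrape because openstack_network_exporter reaches OVS/OVN daemons through their appctl control sockets, and on a compute node only ovn-controller runs: there is no ovn-northd or standalone ovsdb-server socket to find, and the dpif-netdev/* calls fail because no userspace datapath exists. They indicate missing targets, not a broken exporter. A collector can guard against this by checking for the control socket first; an illustrative sketch (not the exporter's actual code), with the rundir assumed from the ovn volume mappings at 01:23:22:

    import glob

    def control_socket(daemon, rundir="/var/lib/openvswitch/ovn"):
        """Return a <daemon>.<pid>.ctl path if the daemon runs here, else None."""
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        return matches[0] if matches else None

    if control_socket("ovn-northd") is None:
        print("ovn-northd not running on this host; skipping its metrics")
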
Nov 26 01:23:32 compute-0 podman[252680]: 2025-11-26 01:23:32.539679745 +0000 UTC m=+0.156059429 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:23:32 compute-0 python3.9[252728]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 01:23:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:33 compute-0 podman[252782]: 2025-11-26 01:23:33.535274596 +0000 UTC m=+0.098161404 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container)
Nov 26 01:23:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.090559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214090607, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1690, "num_deletes": 252, "total_data_size": 2374644, "memory_usage": 2421888, "flush_reason": "Manual Compaction"}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214104174, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1392213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7303, "largest_seqno": 8992, "table_properties": {"data_size": 1386718, "index_size": 2442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16451, "raw_average_key_size": 20, "raw_value_size": 1373507, "raw_average_value_size": 1743, "num_data_blocks": 115, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120059, "oldest_key_time": 1764120059, "file_creation_time": 1764120214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13704 microseconds, and 8729 cpu microseconds.
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.104260) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1392213 bytes OK
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.104286) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.107198) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.107222) EVENT_LOG_v1 {"time_micros": 1764120214107213, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.107245) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2367098, prev total WAL file size 2367098, number of live WAL files 2.
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.108895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1359KB)], [20(6951KB)]
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214108992, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8510137, "oldest_snapshot_seqno": -1}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3370 keys, 6795401 bytes, temperature: kUnknown
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214157074, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6795401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6769491, "index_size": 16392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80621, "raw_average_key_size": 23, "raw_value_size": 6705153, "raw_average_value_size": 1989, "num_data_blocks": 728, "num_entries": 3370, "num_filter_entries": 3370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.157295) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6795401 bytes
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.159366) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.7 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.0) write-amplify(4.9) OK, records in: 3814, records dropped: 444 output_compression: NoCompression
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.159385) EVENT_LOG_v1 {"time_micros": 1764120214159375, "job": 6, "event": "compaction_finished", "compaction_time_micros": 48152, "compaction_time_cpu_micros": 30539, "output_level": 6, "num_output_files": 1, "total_output_size": 6795401, "num_input_records": 3814, "num_output_records": 3370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214159694, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120214160798, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.108513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.161196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.161205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.161209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.161213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:23:34 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:23:34.161217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
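
The JOB 6 summary figures all follow from the byte counts in the EVENT_LOG_v1 records: table #22 in at 1392213 bytes, total input 8510137, table #23 out at 6795401, over compaction_time_micros 48152. Checking the arithmetic:

    # Derived figures from the JOB 6 compaction, using only logged values.
    l0_in  = 1392213           # input table #22 (level 0), bytes
    l6_in  = 8510137 - l0_in   # input table #20 (level 6), bytes
    out    = 6795401           # output table #23, bytes
    micros = 48152             # compaction_time_micros

    print(f"write-amplify      = {out / l0_in:.1f}")                    # 4.9
    print(f"read-write-amplify = {(l0_in + l6_in + out) / l0_in:.1f}")  # 11.0
    print(f"read  MB/s = {(l0_in + l6_in) / micros:.1f}")               # 176.7
    print(f"write MB/s = {out / micros:.1f}")                           # 141.1
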
Nov 26 01:23:34 compute-0 python3.9[252901]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
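
Together with the enabled=True call at 01:23:32, this task completes the usual enable-then-start pair for sshd; both module calls are idempotent. The direct equivalent outside ansible:

    import subprocess

    # Enable on boot, then start now (no-ops if already enabled/running).
    subprocess.run(["systemctl", "enable", "sshd"], check=True)
    subprocess.run(["systemctl", "start", "sshd"], check=True)
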
Nov 26 01:23:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:35 compute-0 python3.9[253054]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:23:36 compute-0 python3.9[253207]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:23:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:37 compute-0 python3.9[253359]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
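
The firewall sequence at 01:23:35-01:23:37 reloads the edpm chain definitions, then checks for and clears the edpm-rules.nft.changed sentinel that flags pending rule updates. The same steps as a sketch:

    import os
    import subprocess

    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)

    sentinel = "/etc/nftables/edpm-rules.nft.changed"
    if os.path.exists(sentinel):   # the stat task above
        os.remove(sentinel)        # the file state=absent task
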
Nov 26 01:23:38 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 26 01:23:38 compute-0 systemd[1]: session-48.scope: Consumed 6.315s CPU time.
Nov 26 01:23:38 compute-0 systemd-logind[800]: Session 48 logged out. Waiting for processes to exit.
Nov 26 01:23:38 compute-0 systemd-logind[800]: Removed session 48.
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 164ba48e-cd81-4ca6-8be4-be246420914f does not exist
Nov 26 01:23:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 70677a46-d4ee-422f-b06b-335a792b59bc does not exist
Nov 26 01:23:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 45c5a8f0-76ed-4b37-bbe7-87fcc0bd368e does not exist
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:23:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:23:39 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:23:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:23:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
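
Each handle_command line is the mon dispatching a JSON mon_command sent by the cephadm mgr module (here: generating a minimal ceph.conf and fetching the client.admin and client.bootstrap-osd keys for the OSD probe that follows). The same call can be issued from Python, assuming the python3-rados bindings and admin credentials are available:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())
    cluster.shutdown()
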
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.593041768 +0000 UTC m=+0.087662537 container create a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.557961167 +0000 UTC m=+0.052581986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:40 compute-0 systemd[1]: Started libpod-conmon-a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473.scope.
Nov 26 01:23:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.743564149 +0000 UTC m=+0.238184958 container init a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.762396681 +0000 UTC m=+0.257017450 container start a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.769256515 +0000 UTC m=+0.263877344 container attach a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:23:40 compute-0 flamboyant_driscoll[253672]: 167 167
Nov 26 01:23:40 compute-0 systemd[1]: libpod-a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473.scope: Deactivated successfully.
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.776359886 +0000 UTC m=+0.270980655 container died a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-76daf2f3767a242896481bd4e5c4046797869a3e1f162f64d484e8112682d378-merged.mount: Deactivated successfully.
Nov 26 01:23:40 compute-0 podman[253656]: 2025-11-26 01:23:40.850314305 +0000 UTC m=+0.344935044 container remove a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:23:40 compute-0 systemd[1]: libpod-conmon-a56d6faffaf51dccb73516240690305033ab51f2faf8c34f01d1110a4b2d6473.scope: Deactivated successfully.
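
This create/start/attach/died/remove run is cephadm's short-lived container pattern: it launches the pinned ceph image, captures one line of output, and removes the container. The "167 167" printed by flamboyant_driscoll is the ceph UID/GID baked into the image, consistent with cephadm's ownership probe; a hand-rolled equivalent (the probed path is our assumption):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: "167 167"
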
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:23:40
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'vms', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control']
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
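
"prepared 0/10 changes" means the upmap optimizer evaluated up to 10 candidate moves across the pools listed above and found none worth making, i.e. PGs are already balanced within the 0.05 max-misplaced budget. Its state can be checked directly:

    import subprocess

    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)
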
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:23:41 compute-0 podman[253695]: 2025-11-26 01:23:41.159012174 +0000 UTC m=+0.103514685 container create 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:23:41 compute-0 podman[253695]: 2025-11-26 01:23:41.107960582 +0000 UTC m=+0.052463153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:41 compute-0 systemd[1]: Started libpod-conmon-7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4.scope.
Nov 26 01:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:41 compute-0 podman[253695]: 2025-11-26 01:23:41.328149852 +0000 UTC m=+0.272652363 container init 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 26 01:23:41 compute-0 podman[253695]: 2025-11-26 01:23:41.357258504 +0000 UTC m=+0.301761015 container start 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:23:41 compute-0 podman[253695]: 2025-11-26 01:23:41.364747835 +0000 UTC m=+0.309250406 container attach 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:23:42 compute-0 infallible_jackson[253711]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:23:42 compute-0 infallible_jackson[253711]: --> relative data size: 1.0
Nov 26 01:23:42 compute-0 infallible_jackson[253711]: --> All data devices are unavailable
Nov 26 01:23:42 compute-0 systemd[1]: libpod-7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4.scope: Deactivated successfully.
Nov 26 01:23:42 compute-0 systemd[1]: libpod-7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4.scope: Consumed 1.184s CPU time.
Nov 26 01:23:42 compute-0 podman[253695]: 2025-11-26 01:23:42.598315789 +0000 UTC m=+1.542818300 container died 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2cd3b59474710a92899ee7101e9c1ac28d131c02fb329ef2738dfe506de1d3-merged.mount: Deactivated successfully.
Nov 26 01:23:42 compute-0 podman[253695]: 2025-11-26 01:23:42.686135009 +0000 UTC m=+1.630637520 container remove 7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:23:42 compute-0 systemd[1]: libpod-conmon-7fbd60e14d1dc99f1038e8a8cfd980bfa686faa8b8f3e03fa158c2f5520e13a4.scope: Deactivated successfully.
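
The infallible_jackson output above is a ceph-volume batch dry run: three LVM devices were passed and all were reported unavailable (already prepared or otherwise in use), so no new OSDs are created. The probe can be reproduced inside the ceph container; the LV paths below are hypothetical stand-ins:

    import subprocess

    report = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/osd0", "/dev/ceph_vg1/osd1", "/dev/ceph_vg2/osd2"],
        capture_output=True, text=True)
    print(report.stdout or report.stderr)
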
Nov 26 01:23:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:43 compute-0 podman[253888]: 2025-11-26 01:23:43.884615861 +0000 UTC m=+0.093280986 container create 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:23:43 compute-0 podman[253888]: 2025-11-26 01:23:43.851069863 +0000 UTC m=+0.059735058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:43 compute-0 systemd[1]: Started libpod-conmon-47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4.scope.
Nov 26 01:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:44 compute-0 podman[253888]: 2025-11-26 01:23:44.034309969 +0000 UTC m=+0.242975144 container init 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:23:44 compute-0 podman[253888]: 2025-11-26 01:23:44.050263969 +0000 UTC m=+0.258929094 container start 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:23:44 compute-0 podman[253888]: 2025-11-26 01:23:44.057548495 +0000 UTC m=+0.266213680 container attach 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:23:44 compute-0 crazy_babbage[253903]: 167 167
Nov 26 01:23:44 compute-0 systemd[1]: libpod-47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4.scope: Deactivated successfully.
Nov 26 01:23:44 compute-0 podman[253888]: 2025-11-26 01:23:44.063768371 +0000 UTC m=+0.272433506 container died 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:23:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-107526fe1b7d76d91873a435a50eaaf6bd57553d19f5ace45603bf454b210db4-merged.mount: Deactivated successfully.
Nov 26 01:23:44 compute-0 podman[253888]: 2025-11-26 01:23:44.133993994 +0000 UTC m=+0.342659119 container remove 47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:23:44 compute-0 systemd[1]: libpod-conmon-47b1f7579bb0fe4794bc3f6a831de6baf6e48a7743d0b3b53569b061080ba1a4.scope: Deactivated successfully.
Nov 26 01:23:44 compute-0 systemd-logind[800]: New session 49 of user zuul.
Nov 26 01:23:44 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 26 01:23:44 compute-0 podman[253929]: 2025-11-26 01:23:44.438511956 +0000 UTC m=+0.112554950 container create a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:23:44 compute-0 podman[253929]: 2025-11-26 01:23:44.405276897 +0000 UTC m=+0.079319951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:44 compute-0 systemd[1]: Started libpod-conmon-a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a.scope.
Nov 26 01:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34275e84cc5cfd1f3b1ef908f2b2fd2d86ce87e79f9fed08b90c7040c548324a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34275e84cc5cfd1f3b1ef908f2b2fd2d86ce87e79f9fed08b90c7040c548324a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34275e84cc5cfd1f3b1ef908f2b2fd2d86ce87e79f9fed08b90c7040c548324a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34275e84cc5cfd1f3b1ef908f2b2fd2d86ce87e79f9fed08b90c7040c548324a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:44 compute-0 podman[253929]: 2025-11-26 01:23:44.629350386 +0000 UTC m=+0.303393390 container init a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:23:44 compute-0 podman[253929]: 2025-11-26 01:23:44.650593626 +0000 UTC m=+0.324636630 container start a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:23:44 compute-0 podman[253929]: 2025-11-26 01:23:44.657622425 +0000 UTC m=+0.331665399 container attach a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:23:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:45 compute-0 sharp_gauss[253990]: {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    "0": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "devices": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "/dev/loop3"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            ],
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_name": "ceph_lv0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_size": "21470642176",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "name": "ceph_lv0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "tags": {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_name": "ceph",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.crush_device_class": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.encrypted": "0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_id": "0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.vdo": "0"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            },
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "vg_name": "ceph_vg0"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        }
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    ],
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    "1": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "devices": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "/dev/loop4"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            ],
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_name": "ceph_lv1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_size": "21470642176",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "name": "ceph_lv1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "tags": {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_name": "ceph",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.crush_device_class": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.encrypted": "0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_id": "1",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.vdo": "0"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            },
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "vg_name": "ceph_vg1"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        }
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    ],
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    "2": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "devices": [
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "/dev/loop5"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            ],
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_name": "ceph_lv2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_size": "21470642176",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "name": "ceph_lv2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "tags": {
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.cluster_name": "ceph",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.crush_device_class": "",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.encrypted": "0",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osd_id": "2",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:                "ceph.vdo": "0"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            },
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "type": "block",
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:            "vg_name": "ceph_vg2"
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:        }
Nov 26 01:23:45 compute-0 sharp_gauss[253990]:    ]
Nov 26 01:23:45 compute-0 sharp_gauss[253990]: }
Nov 26 01:23:45 compute-0 systemd[1]: libpod-a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a.scope: Deactivated successfully.
Nov 26 01:23:45 compute-0 podman[253929]: 2025-11-26 01:23:45.469067575 +0000 UTC m=+1.143110589 container died a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-34275e84cc5cfd1f3b1ef908f2b2fd2d86ce87e79f9fed08b90c7040c548324a-merged.mount: Deactivated successfully.
Nov 26 01:23:45 compute-0 podman[253929]: 2025-11-26 01:23:45.575005117 +0000 UTC m=+1.249048091 container remove a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_gauss, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:23:45 compute-0 systemd[1]: libpod-conmon-a0cca5420038e08e358f10bd2294c3c63b482cb33c6fe68c94fb722b1557c84a.scope: Deactivated successfully.
Nov 26 01:23:45 compute-0 python3.9[254102]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.630419668 +0000 UTC m=+0.076026698 container create 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:23:46 compute-0 systemd[1]: Started libpod-conmon-41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565.scope.
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.602191591 +0000 UTC m=+0.047798651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.758459415 +0000 UTC m=+0.204066515 container init 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.776651629 +0000 UTC m=+0.222258679 container start 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.783238335 +0000 UTC m=+0.228845405 container attach 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:23:46 compute-0 eager_lichterman[254372]: 167 167
Nov 26 01:23:46 compute-0 systemd[1]: libpod-41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565.scope: Deactivated successfully.
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.78835532 +0000 UTC m=+0.233962380 container died 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e581761b3c6821327340e48da7601e1fc710a6907eee1241154ebdc991c273d2-merged.mount: Deactivated successfully.
Nov 26 01:23:46 compute-0 podman[254329]: 2025-11-26 01:23:46.861412003 +0000 UTC m=+0.307019033 container remove 41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:23:46 compute-0 systemd[1]: libpod-conmon-41b9af2aa18f92b19b5a74fb3baae751ae1398a80a5d5f5d4aec7d2d2c66a565.scope: Deactivated successfully.
Nov 26 01:23:47 compute-0 podman[254445]: 2025-11-26 01:23:47.073888904 +0000 UTC m=+0.069697370 container create 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:23:47 compute-0 podman[254445]: 2025-11-26 01:23:47.04155662 +0000 UTC m=+0.037365166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:23:47 compute-0 systemd[1]: Started libpod-conmon-0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca.scope.
Nov 26 01:23:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7a4c9d576b596b4cd35a5b3417eab3cbe83ca904d67a198d2380949315b5ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7a4c9d576b596b4cd35a5b3417eab3cbe83ca904d67a198d2380949315b5ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7a4c9d576b596b4cd35a5b3417eab3cbe83ca904d67a198d2380949315b5ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7a4c9d576b596b4cd35a5b3417eab3cbe83ca904d67a198d2380949315b5ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:23:47 compute-0 podman[254445]: 2025-11-26 01:23:47.246382166 +0000 UTC m=+0.242190712 container init 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:23:47 compute-0 python3.9[254439]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:23:47 compute-0 podman[254445]: 2025-11-26 01:23:47.264626181 +0000 UTC m=+0.260434687 container start 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:23:47 compute-0 podman[254445]: 2025-11-26 01:23:47.274961823 +0000 UTC m=+0.270770319 container attach 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:23:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:48 compute-0 priceless_gates[254461]: {
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_id": 0,
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "type": "bluestore"
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    },
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_id": 2,
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "type": "bluestore"
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    },
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_id": 1,
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:23:48 compute-0 priceless_gates[254461]:        "type": "bluestore"
Nov 26 01:23:48 compute-0 priceless_gates[254461]:    }
Nov 26 01:23:48 compute-0 priceless_gates[254461]: }
Nov 26 01:23:48 compute-0 systemd[1]: libpod-0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca.scope: Deactivated successfully.
Nov 26 01:23:48 compute-0 systemd[1]: libpod-0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca.scope: Consumed 1.179s CPU time.
Nov 26 01:23:48 compute-0 podman[254578]: 2025-11-26 01:23:48.496280861 +0000 UTC m=+0.039527178 container died 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee7a4c9d576b596b4cd35a5b3417eab3cbe83ca904d67a198d2380949315b5ac-merged.mount: Deactivated successfully.
Nov 26 01:23:48 compute-0 python3.9[254571]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 01:23:48 compute-0 podman[254578]: 2025-11-26 01:23:48.599588329 +0000 UTC m=+0.142834556 container remove 0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:23:48 compute-0 systemd[1]: libpod-conmon-0dfc259bb8f834cb9d9373bc825b3d50e36a182a429c7b771fb232c15ff4d3ca.scope: Deactivated successfully.
Nov 26 01:23:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:23:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:23:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 77de293c-4221-475e-9a03-41d983f878d8 does not exist
Nov 26 01:23:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 96329005-0ffe-4133-bee3-3404e4135853 does not exist
Nov 26 01:23:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:23:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:23:51 compute-0 python3.9[254794]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:23:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:52 compute-0 podman[254924]: 2025-11-26 01:23:52.834895898 +0000 UTC m=+0.104130802 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:23:52 compute-0 podman[254920]: 2025-11-26 01:23:52.855753377 +0000 UTC m=+0.126704070 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 01:23:52 compute-0 podman[254927]: 2025-11-26 01:23:52.927825693 +0000 UTC m=+0.193785745 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 01:23:52 compute-0 python3.9[254973]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:23:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:23:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2037 writes, 9032 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2037 writes, 9032 keys, 2037 commit groups, 1.0 writes per commit group, ingest: 10.87 MB, 0.02 MB/s#012Interval WAL: 2037 writes, 2037 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.7      0.07              0.04         3    0.022       0      0       0.0       0.0#012  L6      1/0    6.48 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    153.8    136.6      0.10              0.06         2    0.049    7131    735       0.0       0.0#012 Sum      1/0    6.48 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     91.0    130.5      0.16              0.10         5    0.033    7131    735       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     95.3    136.3      0.16              0.10         4    0.039    7131    735       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    153.8    136.6      0.10              0.06         2    0.049    7131    735       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    135.9      0.06              0.04         2    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 600.20 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,509.42 KB,0.16152%) FilterBlock(6,27.61 KB,0.00875399%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 01:23:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:54 compute-0 python3.9[255158]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:23:55 compute-0 python3.9[255308]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:23:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:55 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 26 01:23:55 compute-0 systemd[1]: session-49.scope: Consumed 9.253s CPU time.
Nov 26 01:23:55 compute-0 systemd-logind[800]: Session 49 logged out. Waiting for processes to exit.
Nov 26 01:23:55 compute-0 systemd-logind[800]: Removed session 49.
Nov 26 01:23:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:58 compute-0 podman[255336]: 2025-11-26 01:23:58.577580834 +0000 UTC m=+0.120583647 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:23:58 compute-0 podman[255335]: 2025-11-26 01:23:58.593519904 +0000 UTC m=+0.139434899 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 26 01:23:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:23:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:23:59 compute-0 podman[158021]: time="2025-11-26T01:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:23:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:23:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6823 "" "Go-http-client/1.1"
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.778 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:23:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:23:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:24:01 compute-0 systemd-logind[800]: New session 50 of user zuul.
Nov 26 01:24:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 26 01:24:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:01 compute-0 openstack_network_exporter[160178]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:24:01 compute-0 openstack_network_exporter[160178]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:24:01 compute-0 openstack_network_exporter[160178]: ERROR   01:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:24:01 compute-0 openstack_network_exporter[160178]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:24:01 compute-0 openstack_network_exporter[160178]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:24:02 compute-0 python3.9[255534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:24:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:03 compute-0 podman[255563]: 2025-11-26 01:24:03.584811407 +0000 UTC m=+0.128450590 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 26 01:24:03 compute-0 podman[255582]: 2025-11-26 01:24:03.701580785 +0000 UTC m=+0.105657406 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_id=edpm, container_name=kepler)
Nov 26 01:24:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:05 compute-0 python3.9[255728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:06 compute-0 python3.9[255880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:07 compute-0 python3.9[256032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:08 compute-0 python3.9[256110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:09 compute-0 python3.9[256262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:09 compute-0 python3.9[256340]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:11 compute-0 python3.9[256492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:11 compute-0 python3.9[256570]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:13 compute-0 python3.9[256722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:14 compute-0 python3.9[256874]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:15 compute-0 python3.9[257026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:16 compute-0 python3.9[257104]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:17 compute-0 python3.9[257256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:17 compute-0 python3.9[257334]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:18 compute-0 python3.9[257486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:19 compute-0 python3.9[257564]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:20 compute-0 python3.9[257717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:21 compute-0 python3.9[257869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:22 compute-0 python3.9[258021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:23 compute-0 podman[258089]: 2025-11-26 01:24:23.557968363 +0000 UTC m=+0.103295719 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:24:23 compute-0 podman[258093]: 2025-11-26 01:24:23.584344078 +0000 UTC m=+0.126443643 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:24:23 compute-0 podman[258094]: 2025-11-26 01:24:23.623581386 +0000 UTC m=+0.159308181 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 01:24:23 compute-0 python3.9[258210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764120262.073727-165-63991106539361/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f918a416dca7e4f15e398242ab1e204ee3e124c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:24 compute-0 python3.9[258362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:25 compute-0 python3.9[258485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764120264.1765392-165-22653870648074/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b771969d0143ad59aea8506fa55f83a43a00e414 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:26 compute-0 python3.9[258637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:27 compute-0 python3.9[258760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764120266.109599-165-252959454758808/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b6153cbfcc3323b3fb60739132b3b7d9f53c5e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:28 compute-0 podman[258884]: 2025-11-26 01:24:28.839318919 +0000 UTC m=+0.096216629 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=)
Nov 26 01:24:28 compute-0 podman[258885]: 2025-11-26 01:24:28.844224508 +0000 UTC m=+0.095852979 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:24:29 compute-0 python3.9[258953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:29 compute-0 podman[158021]: time="2025-11-26T01:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:24:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:24:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6826 "" "Go-http-client/1.1"
Nov 26 01:24:30 compute-0 python3.9[259107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.409931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270409974, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 692, "num_deletes": 251, "total_data_size": 845182, "memory_usage": 858056, "flush_reason": "Manual Compaction"}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270420968, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 837651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8993, "largest_seqno": 9684, "table_properties": {"data_size": 834024, "index_size": 1471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7849, "raw_average_key_size": 18, "raw_value_size": 826776, "raw_average_value_size": 1949, "num_data_blocks": 68, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120214, "oldest_key_time": 1764120214, "file_creation_time": 1764120270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11115 microseconds, and 6278 cpu microseconds.
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.421044) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 837651 bytes OK
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.421067) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.423778) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.423802) EVENT_LOG_v1 {"time_micros": 1764120270423795, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.423888) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 841575, prev total WAL file size 841575, number of live WAL files 2.
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.424888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(818KB)], [23(6636KB)]
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270424941, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7633052, "oldest_snapshot_seqno": -1}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3280 keys, 6066875 bytes, temperature: kUnknown
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270451767, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6066875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6042857, "index_size": 14693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79545, "raw_average_key_size": 24, "raw_value_size": 5981403, "raw_average_value_size": 1823, "num_data_blocks": 642, "num_entries": 3280, "num_filter_entries": 3280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.452125) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6066875 bytes
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.455272) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 282.8 rd, 224.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(16.4) write-amplify(7.2) OK, records in: 3794, records dropped: 514 output_compression: NoCompression
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.455304) EVENT_LOG_v1 {"time_micros": 1764120270455290, "job": 8, "event": "compaction_finished", "compaction_time_micros": 26992, "compaction_time_cpu_micros": 14919, "output_level": 6, "num_output_files": 1, "total_output_size": 6066875, "num_input_records": 3794, "num_output_records": 3280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270456293, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120270459157, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.424728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.459461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.459468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.459471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.459474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:30 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:24:30.459477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:24:31 compute-0 python3.9[259259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:31 compute-0 openstack_network_exporter[160178]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:24:31 compute-0 openstack_network_exporter[160178]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:24:31 compute-0 openstack_network_exporter[160178]: ERROR   01:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:24:31 compute-0 openstack_network_exporter[160178]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:24:31 compute-0 openstack_network_exporter[160178]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:24:31 compute-0 python3.9[259337]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:32 compute-0 python3.9[259489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:33 compute-0 python3.9[259567]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:34 compute-0 podman[259691]: 2025-11-26 01:24:34.530741597 +0000 UTC m=+0.125106865 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, release=1214.1726694543, container_name=kepler, release-0.7.12=, io.buildah.version=1.29.0, version=9.4, config_id=edpm, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9)
Nov 26 01:24:34 compute-0 podman[259692]: 2025-11-26 01:24:34.549746233 +0000 UTC m=+0.140525280 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:24:34 compute-0 python3.9[259752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:35 compute-0 python3.9[259831]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:36 compute-0 python3.9[259983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:37 compute-0 python3.9[260135]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:38 compute-0 python3.9[260287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:39 compute-0 python3.9[260365]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:40 compute-0 python3.9[260517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:24:40
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', 'images']
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:24:41 compute-0 python3.9[260595]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:24:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:42 compute-0 python3.9[260747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:42 compute-0 python3.9[260825]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:44 compute-0 python3.9[260977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:45 compute-0 python3.9[261129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:46 compute-0 python3.9[261207]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:47 compute-0 python3.9[261359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:48 compute-0 python3.9[261511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:49 compute-0 python3.9[261685]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev abc59168-2e0e-48e4-a24c-8c2eaef72600 does not exist
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 14ed57fd-a7e8-4583-8169-d9d71f3ef361 does not exist
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 071837c5-8a7a-4145-89d1-0694171f8c69 does not exist
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:24:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:24:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:24:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:24:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:24:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:24:50 compute-0 python3.9[261961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.121462104 +0000 UTC m=+0.092391321 container create 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.082153824 +0000 UTC m=+0.053083051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:51 compute-0 systemd[1]: Started libpod-conmon-7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd.scope.
Nov 26 01:24:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.277379228 +0000 UTC m=+0.248308505 container init 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.29266825 +0000 UTC m=+0.263597477 container start 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.299221425 +0000 UTC m=+0.270150692 container attach 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:24:51 compute-0 serene_mccarthy[262104]: 167 167
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.305657217 +0000 UTC m=+0.276586434 container died 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:24:51 compute-0 systemd[1]: libpod-7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd.scope: Deactivated successfully.
Nov 26 01:24:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4698953df3f33f2047496243ebcd878262d5086b45370c3e07db599ab3dcc038-merged.mount: Deactivated successfully.
Nov 26 01:24:51 compute-0 podman[262053]: 2025-11-26 01:24:51.42333256 +0000 UTC m=+0.394261777 container remove 7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:24:51 compute-0 systemd[1]: libpod-conmon-7a7a621e43fca2a6dd2ae4eebe9a069d908af643ec9d3b8a03f963fab54525fd.scope: Deactivated successfully.
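The create/attach/died/remove sequence that just completed for serene_mccarthy is the normal lifecycle of a short-lived cephadm helper container. A minimal sketch of watching such lifecycles from the host, assuming podman's events subcommand with JSON output (the event field names follow podman's JSON event stream and may vary slightly across versions):

    import json
    import subprocess

    # Stream podman lifecycle events as one JSON object per line.
    # "event=died" is a documented podman events filter.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "event=died"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Key names ("Status", "Name", "ID") match podman's JSON events;
        # adjust if your podman release differs.
        print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])

Run against a node like this one, each short-lived ceph helper (serene_mccarthy, frosty_euler, ...) would appear as a died event within a second or two of its start event.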
Nov 26 01:24:51 compute-0 podman[262193]: 2025-11-26 01:24:51.69913801 +0000 UTC m=+0.073398474 container create 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:24:51 compute-0 podman[262193]: 2025-11-26 01:24:51.669440631 +0000 UTC m=+0.043701135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:51 compute-0 systemd[1]: Started libpod-conmon-99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425.scope.
Nov 26 01:24:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:51 compute-0 podman[262193]: 2025-11-26 01:24:51.883396895 +0000 UTC m=+0.257657409 container init 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:24:51 compute-0 podman[262193]: 2025-11-26 01:24:51.910580632 +0000 UTC m=+0.284841076 container start 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:24:51 compute-0 podman[262193]: 2025-11-26 01:24:51.916292264 +0000 UTC m=+0.290552778 container attach 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 01:24:51 compute-0 python3.9[262216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:52 compute-0 python3.9[262351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764120291.052338-375-144720176916072/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
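The ansible copy invocation above records checksum=661af12c565470228d854ced01dfaeaefe9a4726 (sha1) for the deployed nova CA bundle. A small sketch of re-verifying that digest on the host, assuming only the path and checksum taken from the log line itself:

    import hashlib

    path = "/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem"
    expected = "661af12c565470228d854ced01dfaeaefe9a4726"  # from the log line above

    h = hashlib.sha1()
    with open(path, "rb") as f:
        # Hash in chunks so large bundles do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)

    # Ansible's copy module performs this same comparison to decide
    # whether the destination needs rewriting.
    assert h.hexdigest() == expected, "CA bundle changed since deployment"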
Nov 26 01:24:53 compute-0 frosty_euler[262222]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:24:53 compute-0 frosty_euler[262222]: --> relative data size: 1.0
Nov 26 01:24:53 compute-0 frosty_euler[262222]: --> All data devices are unavailable
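The frosty_euler output reads like ceph-volume's batch planner reporting that all three LVM data devices are already consumed, so there is nothing new to deploy. A sketch of requesting the same plan by hand, assuming ceph-volume's documented lvm batch --report mode (normally run inside the ceph container, as cephadm does here) and the LV paths that appear in the inventory printed later in this log:

    import json
    import subprocess

    # LV paths taken from the ceph-volume inventory further down this log.
    lvs = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]

    # --report only plans and prints; it does not create OSDs.
    out = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *lvs],
        capture_output=True,
        text=True,
        check=True,
    )
    print(json.loads(out.stdout))

On a node in this state the report should come back empty, matching the "All data devices are unavailable" line above.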
Nov 26 01:24:53 compute-0 systemd[1]: libpod-99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425.scope: Deactivated successfully.
Nov 26 01:24:53 compute-0 systemd[1]: libpod-99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425.scope: Consumed 1.226s CPU time.
Nov 26 01:24:53 compute-0 podman[262193]: 2025-11-26 01:24:53.196896026 +0000 UTC m=+1.571156490 container died 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d3d7c92a5bda2cae7abca3ce735a8701ae3c6186faec509ab28728250dc1186-merged.mount: Deactivated successfully.
Nov 26 01:24:53 compute-0 podman[262193]: 2025-11-26 01:24:53.309904688 +0000 UTC m=+1.684165172 container remove 99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:24:53 compute-0 systemd[1]: libpod-conmon-99e3456290d95c086539b13f1e993aa84b69f8fd4f36d6f0c101ed9203ab0425.scope: Deactivated successfully.
Nov 26 01:24:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:53 compute-0 podman[262556]: 2025-11-26 01:24:53.773203324 +0000 UTC m=+0.118301872 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:24:53 compute-0 podman[262555]: 2025-11-26 01:24:53.789721271 +0000 UTC m=+0.134837880 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:24:53 compute-0 podman[262559]: 2025-11-26 01:24:53.812158724 +0000 UTC m=+0.147180338 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
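The three health_status=healthy events above come from podman's built-in healthchecks; the test command and healthcheck mount for each container are visible in its config_data. A sketch of reading the current health record back for one of them, assuming podman inspect's JSON output keeps the record under State (the exact key has varied across podman versions, so both spellings are probed):

    import json
    import subprocess

    # ovn_controller is one of the containers reporting healthy above.
    out = subprocess.run(
        ["podman", "inspect", "ovn_controller"],
        capture_output=True,
        text=True,
        check=True,
    )
    state = json.loads(out.stdout)[0]["State"]

    # Newer podman exposes Docker-compatible "Health"; some older
    # releases used "Healthcheck". Probe both rather than assume one.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))

A healthy container here should print "healthy failing streak: 0", mirroring the health_failing_streak=0 fields in the events above.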
Nov 26 01:24:53 compute-0 python3.9[262654]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:54 compute-0 podman[262789]: 2025-11-26 01:24:54.403375394 +0000 UTC m=+0.087897684 container create ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:24:54 compute-0 podman[262789]: 2025-11-26 01:24:54.377648827 +0000 UTC m=+0.062171127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:54 compute-0 systemd[1]: Started libpod-conmon-ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1.scope.
Nov 26 01:24:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:54 compute-0 podman[262789]: 2025-11-26 01:24:54.547597867 +0000 UTC m=+0.232120227 container init ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:24:54 compute-0 podman[262789]: 2025-11-26 01:24:54.558712161 +0000 UTC m=+0.243234451 container start ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:24:54 compute-0 podman[262789]: 2025-11-26 01:24:54.565418801 +0000 UTC m=+0.249941121 container attach ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:24:54 compute-0 busy_raman[262841]: 167 167
Nov 26 01:24:54 compute-0 systemd[1]: libpod-ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1.scope: Deactivated successfully.
Nov 26 01:24:54 compute-0 podman[262856]: 2025-11-26 01:24:54.645604996 +0000 UTC m=+0.053547834 container died ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95be19cfa1e272a194d9aee9d4de4a8e4f4360937d685c890a6809dc1baadb6-merged.mount: Deactivated successfully.
Nov 26 01:24:54 compute-0 podman[262856]: 2025-11-26 01:24:54.699958211 +0000 UTC m=+0.107901029 container remove ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:24:54 compute-0 systemd[1]: libpod-conmon-ea697c62e71b006fd0f069cc3de3a498f7a91a06c8947f6493d210b03683dfe1.scope: Deactivated successfully.
Nov 26 01:24:54 compute-0 podman[262930]: 2025-11-26 01:24:54.978452896 +0000 UTC m=+0.085197616 container create f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:24:55 compute-0 podman[262930]: 2025-11-26 01:24:54.944674332 +0000 UTC m=+0.051419102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:55 compute-0 systemd[1]: Started libpod-conmon-f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a.scope.
Nov 26 01:24:55 compute-0 python3.9[262931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5c56c777dd7733f0ba45d25143c469b963f0eabf59a6cd3fbf3e2603ae6cd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5c56c777dd7733f0ba45d25143c469b963f0eabf59a6cd3fbf3e2603ae6cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5c56c777dd7733f0ba45d25143c469b963f0eabf59a6cd3fbf3e2603ae6cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5c56c777dd7733f0ba45d25143c469b963f0eabf59a6cd3fbf3e2603ae6cd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:55 compute-0 podman[262930]: 2025-11-26 01:24:55.147707277 +0000 UTC m=+0.254451997 container init f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:24:55 compute-0 podman[262930]: 2025-11-26 01:24:55.169129082 +0000 UTC m=+0.275873762 container start f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 01:24:55 compute-0 podman[262930]: 2025-11-26 01:24:55.174692269 +0000 UTC m=+0.281436989 container attach f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:24:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:55 compute-0 python3.9[263029]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]: {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    "0": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "devices": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "/dev/loop3"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            ],
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_name": "ceph_lv0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_size": "21470642176",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "name": "ceph_lv0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "tags": {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_name": "ceph",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.crush_device_class": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.encrypted": "0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_id": "0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.vdo": "0"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            },
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "vg_name": "ceph_vg0"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        }
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    ],
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    "1": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "devices": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "/dev/loop4"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            ],
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_name": "ceph_lv1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_size": "21470642176",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "name": "ceph_lv1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "tags": {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_name": "ceph",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.crush_device_class": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.encrypted": "0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_id": "1",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.vdo": "0"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            },
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "vg_name": "ceph_vg1"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        }
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    ],
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    "2": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "devices": [
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "/dev/loop5"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            ],
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_name": "ceph_lv2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_size": "21470642176",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "name": "ceph_lv2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "tags": {
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.cluster_name": "ceph",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.crush_device_class": "",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.encrypted": "0",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osd_id": "2",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:                "ceph.vdo": "0"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            },
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "type": "block",
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:            "vg_name": "ceph_vg2"
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:        }
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]:    ]
Nov 26 01:24:55 compute-0 mystifying_rosalind[262947]: }
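The JSON that mystifying_rosalind just printed is a ceph-volume lvm list style inventory keyed by OSD id, with the LV path, backing device, and ceph.* tags for each OSD. A sketch of extracting the OSD-to-device mapping from it, assuming the document structure shown above; the payload is read from stdin since in practice it would be captured from the container's stdout:

    import json
    import sys

    # Feed this script the JSON block printed above, e.g. via stdin.
    inventory = json.load(sys.stdin)

    # Keys are OSD ids as strings ("0", "1", "2"); each maps to a list
    # of logical volumes belonging to that OSD.
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(
                f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                f"(osd_fsid {tags['ceph.osd_fsid']})"
            )

For the inventory above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, each backed by a ceph_vgN/ceph_lvN logical volume.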
Nov 26 01:24:56 compute-0 systemd[1]: libpod-f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a.scope: Deactivated successfully.
Nov 26 01:24:56 compute-0 podman[262930]: 2025-11-26 01:24:56.025700367 +0000 UTC m=+1.132445077 container died f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed5c56c777dd7733f0ba45d25143c469b963f0eabf59a6cd3fbf3e2603ae6cd4-merged.mount: Deactivated successfully.
Nov 26 01:24:56 compute-0 podman[262930]: 2025-11-26 01:24:56.13592535 +0000 UTC m=+1.242670060 container remove f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:24:56 compute-0 systemd[1]: libpod-conmon-f92acca440b53f68b6de28d5185c8b01be5fefdc5fcfc63ba062b97e7d56f96a.scope: Deactivated successfully.
Nov 26 01:24:57 compute-0 python3.9[263296]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.34422061 +0000 UTC m=+0.090704723 container create a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 01:24:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.31022215 +0000 UTC m=+0.056706333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:57 compute-0 systemd[1]: Started libpod-conmon-a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4.scope.
Nov 26 01:24:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.469143529 +0000 UTC m=+0.215627712 container init a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.491788778 +0000 UTC m=+0.238272911 container start a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.498078066 +0000 UTC m=+0.244562259 container attach a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:24:57 compute-0 vibrant_antonelli[263396]: 167 167
Nov 26 01:24:57 compute-0 systemd[1]: libpod-a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4.scope: Deactivated successfully.
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.502264534 +0000 UTC m=+0.248748637 container died a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-210e7224c5365d50a48dca832d915a25b5d3748f1ee934cd630cf2936ebe250f-merged.mount: Deactivated successfully.
Nov 26 01:24:57 compute-0 podman[263348]: 2025-11-26 01:24:57.588421218 +0000 UTC m=+0.334905351 container remove a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:24:57 compute-0 systemd[1]: libpod-conmon-a615e39b3aec9829851b6c948e95a11195a5c0fc0bedd68806e9e00e54f484f4.scope: Deactivated successfully.
Nov 26 01:24:57 compute-0 podman[263479]: 2025-11-26 01:24:57.8674751 +0000 UTC m=+0.094802429 container create 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:24:57 compute-0 podman[263479]: 2025-11-26 01:24:57.830936708 +0000 UTC m=+0.058264087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:24:57 compute-0 systemd[1]: Started libpod-conmon-83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0.scope.
Nov 26 01:24:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cef95196b3b2ed22b6c4662e4a0b8e11b2fd8b35dc6363e56b07879f7231707/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cef95196b3b2ed22b6c4662e4a0b8e11b2fd8b35dc6363e56b07879f7231707/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cef95196b3b2ed22b6c4662e4a0b8e11b2fd8b35dc6363e56b07879f7231707/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cef95196b3b2ed22b6c4662e4a0b8e11b2fd8b35dc6363e56b07879f7231707/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:24:58 compute-0 podman[263479]: 2025-11-26 01:24:58.050673704 +0000 UTC m=+0.278001043 container init 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:24:58 compute-0 podman[263479]: 2025-11-26 01:24:58.093167555 +0000 UTC m=+0.320494884 container start 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:24:58 compute-0 podman[263479]: 2025-11-26 01:24:58.102167239 +0000 UTC m=+0.329494538 container attach 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:24:58 compute-0 python3.9[263541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:24:58 compute-0 python3.9[263621]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:24:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:24:59 compute-0 sweet_turing[263536]: {
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_id": 0,
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "type": "bluestore"
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    },
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_id": 2,
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "type": "bluestore"
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    },
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_id": 1,
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:24:59 compute-0 sweet_turing[263536]:        "type": "bluestore"
Nov 26 01:24:59 compute-0 sweet_turing[263536]:    }
Nov 26 01:24:59 compute-0 sweet_turing[263536]: }
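sweet_turing's output is the complementary view: keyed by osd_uuid, with the resolved device-mapper path and the bluestore type for each OSD. A sketch of cross-checking it against the per-OSD-id inventory printed earlier, assuming both JSON documents have been captured to files (the filenames are illustrative, not from the log):

    import json

    # Illustrative filenames; in practice each payload comes from the
    # corresponding container stdout block shown above.
    with open("lvm_list.json") as f:
        by_osd_id = json.load(f)    # keyed by OSD id: "0", "1", "2"
    with open("raw_list.json") as f:
        by_osd_uuid = json.load(f)  # keyed by osd_uuid

    for uuid, osd in by_osd_uuid.items():
        lvs = by_osd_id[str(osd["osd_id"])]
        # The osd_fsid tag in the LVM view should match the osd_uuid key
        # in this view for the same OSD.
        assert lvs[0]["tags"]["ceph.osd_fsid"] == uuid
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']}) ok")

With the two payloads above, all three OSDs check out: osd.0 on ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1, and osd.2 on ceph_vg2-ceph_lv2, all bluestore and all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c.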
Nov 26 01:24:59 compute-0 systemd[1]: libpod-83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0.scope: Deactivated successfully.
Nov 26 01:24:59 compute-0 systemd[1]: libpod-83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0.scope: Consumed 1.201s CPU time.
Nov 26 01:24:59 compute-0 podman[263479]: 2025-11-26 01:24:59.288554969 +0000 UTC m=+1.515882298 container died 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:24:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cef95196b3b2ed22b6c4662e4a0b8e11b2fd8b35dc6363e56b07879f7231707-merged.mount: Deactivated successfully.
Nov 26 01:24:59 compute-0 podman[263479]: 2025-11-26 01:24:59.414346952 +0000 UTC m=+1.641674231 container remove 83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:24:59 compute-0 systemd[1]: libpod-conmon-83b4d628c7e495846bd63bd44277686103125ae843a679324f3eb1abb35c03b0.scope: Deactivated successfully.
Nov 26 01:24:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:24:59 compute-0 podman[263710]: 2025-11-26 01:24:59.475350445 +0000 UTC m=+0.133045199 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 26 01:24:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:24:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:24:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
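
The two handle_command records show the cephadm mgr module persisting that device scan into the monitor's config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0); the audit-channel lines after each command are the matching audit entries, with their payloads elided in this capture. A sketch reading one of those keys back with the standard ceph CLI (requires an admin keyring on the node; the stored value is printed verbatim and, for cephadm, is typically JSON):

    import subprocess

    # 'ceph config-key get' prints the stored value as-is.
    key = "mgr/cephadm/host.compute-0.devices.0"
    value = subprocess.check_output(["ceph", "config-key", "get", key])
    print(value.decode(errors="replace"))
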
Nov 26 01:24:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4bb59f8e-92a6-4d03-b4c3-61827fb851a8 does not exist
Nov 26 01:24:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d55700e8-7ea4-4113-8a31-90ef34f52f6a does not exist
Nov 26 01:24:59 compute-0 podman[263717]: 2025-11-26 01:24:59.499039654 +0000 UTC m=+0.156365978 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:24:59 compute-0 podman[158021]: time="2025-11-26T01:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:24:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:24:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6817 "" "Go-http-client/1.1"
Nov 26 01:24:59 compute-0 python3.9[263902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:25:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:25:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:25:00 compute-0 python3.9[264054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: ERROR   01:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:25:01 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:25:01 compute-0 python3.9[264177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764120300.2085235-441-100565344266330/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=661af12c565470228d854ced01dfaeaefe9a4726 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
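
Each ansible-ansible.legacy.copy record logs the SHA-1 of the file it deployed (see the checksum= field above for the neutron-metadata CA bundle), which makes post-hoc verification trivial. A sketch, with the path and digest copied from the log line:

    import hashlib

    path = "/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem"
    expected = "661af12c565470228d854ced01dfaeaefe9a4726"  # checksum= from the log

    with open(path, "rb") as f:
        actual = hashlib.sha1(f.read()).hexdigest()
    print("OK" if actual == expected else f"MISMATCH: {actual}")
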
Nov 26 01:25:03 compute-0 python3.9[264329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:25:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
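
The _set_new_cache_sizes line that repeats every five seconds is the monitor republishing how it splits its cache budget: 348127232 + 348127232 + 322961408 = 1019215872 bytes, i.e. the incremental-osdmap, full-osdmap and RocksDB (kv) caches together consume essentially the whole cache_size of 1020054731, with the small remainder lost to rounding into allocation units. A quick arithmetic check:

    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232
    kv_alloc = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)  # 1019215872 and 838859 unallocated
    for name, n in (("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)):
        print(f"{name}: {n / cache_size:.1%}")  # roughly a 34/34/32 split
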
Nov 26 01:25:04 compute-0 python3.9[264481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:04 compute-0 podman[264532]: 2025-11-26 01:25:04.812043464 +0000 UTC m=+0.148752792 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:25:04 compute-0 podman[264531]: 2025-11-26 01:25:04.818761634 +0000 UTC m=+0.160952897 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64)
Nov 26 01:25:04 compute-0 python3.9[264596]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:06 compute-0 python3.9[264748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:25:07 compute-0 python3.9[264900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:07 compute-0 python3.9[264978]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:08 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 26 01:25:08 compute-0 systemd[1]: session-50.scope: Consumed 58.406s CPU time.
Nov 26 01:25:08 compute-0 systemd-logind[800]: Session 50 logged out. Waiting for processes to exit.
Nov 26 01:25:08 compute-0 systemd-logind[800]: Removed session 50.
Nov 26 01:25:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:13 compute-0 systemd-logind[800]: New session 51 of user zuul.
Nov 26 01:25:13 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 26 01:25:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:15 compute-0 python3.9[265158]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:16 compute-0 python3.9[265310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:17 compute-0 python3.9[265433]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120315.4252212-34-34553419204045/.source.conf _original_basename=ceph.conf follow=False checksum=8ba320f6f139e3664cdda7140f1997f97b59dc75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:18 compute-0 python3.9[265585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:19 compute-0 python3.9[265708]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120317.920118-34-206284619055688/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f0b66bb9353ce94c732bb9473056fe6c0a7a3767 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:19 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 26 01:25:19 compute-0 systemd[1]: session-51.scope: Consumed 4.721s CPU time.
Nov 26 01:25:19 compute-0 systemd-logind[800]: Session 51 logged out. Waiting for processes to exit.
Nov 26 01:25:19 compute-0 systemd-logind[800]: Removed session 51.
Nov 26 01:25:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:24 compute-0 podman[265734]: 2025-11-26 01:25:24.557677993 +0000 UTC m=+0.103065973 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:25:24 compute-0 podman[265735]: 2025-11-26 01:25:24.570754882 +0000 UTC m=+0.122674556 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:25:24 compute-0 podman[265736]: 2025-11-26 01:25:24.632001802 +0000 UTC m=+0.161223785 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 01:25:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:25 compute-0 systemd-logind[800]: New session 52 of user zuul.
Nov 26 01:25:25 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 26 01:25:27 compute-0 python3.9[265953]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:25:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:28 compute-0 python3.9[266109]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:25:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:29 compute-0 python3.9[266261]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:25:29 compute-0 podman[158021]: time="2025-11-26T01:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:25:29 compute-0 podman[266263]: 2025-11-26 01:25:29.75147109 +0000 UTC m=+0.110763580 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 01:25:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:25:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
Nov 26 01:25:29 compute-0 podman[266262]: 2025-11-26 01:25:29.788455594 +0000 UTC m=+0.147130056 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6)
Nov 26 01:25:30 compute-0 python3.9[266454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:25:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: ERROR   01:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:25:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:25:31 compute-0 python3.9[266606]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 01:25:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:33 compute-0 python3.9[266758]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:25:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:35 compute-0 python3.9[266842]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:25:35 compute-0 podman[266844]: 2025-11-26 01:25:35.2047036 +0000 UTC m=+0.139534802 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, architecture=x86_64, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container)
Nov 26 01:25:35 compute-0 podman[266845]: 2025-11-26 01:25:35.227990888 +0000 UTC m=+0.159918918 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118)
Nov 26 01:25:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:37 compute-0 python3.9[267031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:25:39 compute-0 python3[267186]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
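
The snippet content in the record above is readable once you undo the syslog control-character escaping: #012 is octal 012, i.e. a newline. Decoded, it defines four firewall rules: UDP 4789 (neutron VXLAN), UDP 6081 (neutron Geneve, UNTRACKED), and two raw-table NOTRACK rules for Geneve on OUTPUT and PREROUTING. A one-line decoding sketch (the string literal is abbreviated here):

    # '#012' is the octal escape syslog applies to control characters;
    # replacing it with '\n' recovers the original YAML snippet.
    raw = ("- rule_name: 118 neutron vxlan networks#012  rule:#012"
           "    proto: udp#012    dport: 4789")
    print(raw.replace("#012", "\n"))
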
Nov 26 01:25:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:40 compute-0 python3.9[267338]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:25:41
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:25:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:41 compute-0 python3.9[267490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:42 compute-0 python3.9[267568]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:43 compute-0 python3.9[267720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:44 compute-0 python3.9[267798]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._nkves42 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:45 compute-0 python3.9[267950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:45 compute-0 python3.9[268028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:47 compute-0 python3.9[268180]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
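
nft -j list ruleset asks nftables for its native JSON export, which the deployment presumably uses to reconcile the live ruleset against the YAML rule files under /var/lib/edpm-config/firewall (see the edpm_nftables_from_files task shortly after). The JSON is a flat {"nftables": [...]} array of one-key objects (metainfo, table, chain, rule, ...), so simple queries need no special tooling; a sketch counting rules per table (needs root, like the logged task):

    import json
    import subprocess
    from collections import Counter

    # Each "rule" element carries its family/table/chain plus an expr list.
    doc = json.loads(subprocess.check_output(["nft", "-j", "list", "ruleset"]))
    per_table = Counter(
        obj["rule"]["table"] for obj in doc["nftables"] if "rule" in obj
    )
    for table, count in per_table.most_common():
        print(table, count)
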
Nov 26 01:25:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:48 compute-0 python3[268333]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 01:25:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:49 compute-0 python3.9[268486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:50 compute-0 python3.9[268564]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:25:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
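
Each pg_autoscaler pool line above is a straightforward product: pg target = (fraction of raw space used) x bias x a cluster-wide PG budget. The logged numbers back out to a budget of exactly 300 for every pool, plausibly the 3 OSDs on this host times the default target of 100 PGs per OSD, though that decomposition is an inference; the target is then quantized to a power of two subject to per-pool minimums (hence '.mgr' landing on 1 and cephfs.cephfs.meta on 16). Reproducing two of the lines:

    BUDGET = 300.0  # inferred from the log; likely 3 OSDs * 100 PGs/OSD

    for pool, ratio, bias in (
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ):
        print(pool, ratio * bias * BUDGET)
    # prints the logged pg targets, ~0.0021557... and ~0.00061047...,
    # which the autoscaler quantizes to 1 and 16 respectively
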
Nov 26 01:25:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:51 compute-0 python3.9[268716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:52 compute-0 python3.9[268794]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:25:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5468 writes, 23K keys, 5468 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5468 writes, 790 syncs, 6.92 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5468 writes, 23K keys, 5468 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s
Interval WAL: 5468 writes, 790 syncs, 6.92 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
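RocksDB emits stats dumps like the one above (another follows at 01:25:59) as a single multi-line message; rsyslog-style transports flatten them, replacing each control character with a #ooo octal escape (#012 is LF, #011 is TAB). A minimal Python sketch to undo that escaping when post-processing such a log; the "messages" filename is a placeholder, and it assumes every #ooo run really is an escape:

    import re

    # rsyslog's default control-character escaping renders control characters
    # as #ooo octal escapes: #012 -> "\n", #011 -> "\t".
    OCTAL_ESCAPE = re.compile(r"#([0-7]{3})")

    def unescape(line: str) -> str:
        # Substitute each octal escape with the character it encodes.
        return OCTAL_ESCAPE.sub(lambda m: chr(int(m.group(1), 8)), line)

    with open("messages") as log:  # placeholder path for a log like this one
        for raw in log:
            print(unescape(raw.rstrip("\n")))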
Nov 26 01:25:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
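The pgmap DBG summaries that ceph-mgr logs every couple of seconds (v447 here, v448 through v450 below) pack the map version, PG state counts, and capacity figures into one line. A hedged regex sketch for pulling the fields apart, assuming this single-state, one-line formatting:

    import re

    # Matches lines like:
    #   pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP.search(line)
    if m:
        print(m.group("version"), m.group("pgs"), m.group("states"))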
Nov 26 01:25:53 compute-0 python3.9[268946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:54 compute-0 python3.9[269024]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:55 compute-0 podman[269102]: 2025-11-26 01:25:55.611715485 +0000 UTC m=+0.147059695 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:25:55 compute-0 podman[269101]: 2025-11-26 01:25:55.616361936 +0000 UTC m=+0.157685025 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4)
Nov 26 01:25:55 compute-0 podman[269103]: 2025-11-26 01:25:55.654499653 +0000 UTC m=+0.181226129 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
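The three health_status events above embed each container's full launch configuration as config_data={...}, a Python dict literal, so it can be recovered with ast.literal_eval rather than ad-hoc string slicing. A sketch that walks the balanced braces to find where the literal ends; it assumes the braces are balanced and no stray '}' appears inside a string value, which holds for these entries:

    import ast

    def extract_config_data(logline: str) -> dict:
        # Locate "config_data=" and parse the balanced {...} span after it.
        start = logline.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(logline[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(logline[start : i + 1])
        raise ValueError("unbalanced config_data literal")

For the podman_exporter event this yields, for example, cfg["image"] == "quay.io/navidys/prometheus-podman-exporter:v1.10.1" and cfg["healthcheck"]["test"] == "/openstack/healthcheck podman_exporter".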
Nov 26 01:25:56 compute-0 python3.9[269240]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:57 compute-0 python3.9[269318]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:25:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:59 compute-0 python3.9[269470]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:25:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:25:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6769 writes, 28K keys, 6769 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6769 writes, 1181 syncs, 5.73 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6769 writes, 28K keys, 6769 commit groups, 1.0 writes per commit group, ingest: 19.57 MB, 0.03 MB/s
Interval WAL: 6769 writes, 1181 syncs, 5.73 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 26 01:25:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:25:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:25:59 compute-0 python3.9[269548]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
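The ansible-ansible.legacy.stat and ansible-ansible.legacy.file entries in this stretch are Ansible's syslog echo of each module invocation against the three nftables files, flattened into key=value pairs. A small sketch that turns one back into a dict; it assumes values carry no embedded whitespace, which holds for these tasks:

    def parse_invocation(logline: str) -> dict:
        # Everything after "Invoked with" is a space-separated key=value list.
        _, _, args = logline.partition(" Invoked with ")
        parsed = {}
        for token in args.split():
            key, sep, value = token.partition("=")
            if sep:
                # Map literal True/False/None spellings back to Python values.
                parsed[key] = {"True": True, "False": False, "None": None}.get(value, value)
        return parsed

    sample = ("ansible-ansible.legacy.file Invoked with group=root mode=0600 "
              "owner=root dest=/etc/nftables/edpm-rules.nft follow=True")
    print(parse_invocation(sample)["mode"])  # -> 0600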
Nov 26 01:25:59 compute-0 podman[158021]: time="2025-11-26T01:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:25:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:25:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6826 "" "Go-http-client/1.1"
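These two GET lines are prometheus-podman-exporter polling the libpod REST API over the Unix socket it was handed via CONTAINER_HOST=unix:///run/podman/podman.sock in the config above. A minimal standard-library sketch of the first query; the socket path and /v4.9.3 prefix are copied from this log, and this is an illustration, not a replacement for the podman client bindings:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over an AF_UNIX socket; "localhost" only feeds the Host header.
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers))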
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.778 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
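The two manager messages describe the polling fan-out: every pollster in the [pollsters] source is submitted to a ThreadPoolExecutor, and with a single worker thread the submissions queue up, which is why the preceding line warns that the cycle may run long. An illustrative sketch of that pattern (hypothetical pollster names, not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        # Stand-in for a pollster's sample collection.
        return f"{name}: polled"

    pollsters = ["disk.device.usage", "power.state", "cpu", "memory.usage"]

    # max_workers=1 mirrors "Processing pollsters for [pollsters] with [1] threads";
    # four submissions share one thread, so they execute serially.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, p) for p in pollsters]
        for future in futures:
            print(future.result())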
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:25:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:25:59.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
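The ceilometer_agent_compute burst above is one complete polling cycle: each pollster is registered against a shared ThreadPoolExecutor, discovery (local_instances) runs per pollster, the pollster is skipped when discovery returns no resources (no instances run on this node yet, hence the empty "discovery cache [{'local_instances': []}]"), and every pollster is then marked finished. A minimal sketch of that pattern follows; the names (run_pollster, discover_local_instances) are hypothetical and this is not ceilometer's real API, only the register/discover/skip/finish flow the log records.

```python
# Minimal sketch of the register/discover/skip/finish cycle logged above.
# Hypothetical names; not ceilometer's actual classes. The pattern: each
# pollster is submitted to a shared ThreadPoolExecutor, discovery runs
# first, and a pollster is skipped when discovery finds no resources.
from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # An idle compute node yields no instances, matching the
    # "discovery cache [{'local_instances': []}]" entries above.
    return []

def run_pollster(name):
    resources = discover_local_instances()
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return
    print(f"Polled {name} for {len(resources)} resources")

pollsters = ["cpu", "memory.usage", "disk.device.read.bytes"]
with ThreadPoolExecutor(max_workers=4) as executor:
    for future in [executor.submit(run_pollster, p) for p in pollsters]:
        future.result()  # propagate worker exceptions
for p in pollsters:
    print(f"Finished processing pollster [{p}].")
```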
Nov 26 01:25:59 compute-0 podman[269598]: 2025-11-26 01:25:59.921557361 +0000 UTC m=+0.076171773 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 01:25:59 compute-0 podman[269599]: 2025-11-26 01:25:59.980919867 +0000 UTC m=+0.125643419 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
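The node_exporter container above is started with --collector.systemd.unit-include, so only systemd units matching the logged regex are exported. A quick, illustrative way to check which unit names the pattern admits; the unit names below are examples, and Python's re handles this simple alternation the same way Go's RE2 does (node_exporter anchors the pattern, hence fullmatch here):

```python
# Sanity-check the --collector.systemd.unit-include pattern logged above.
import re

pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]:
    print(unit, "->", bool(pattern.fullmatch(unit)))
# sshd.service -> False: it is excluded from node_exporter's systemd metrics.
```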
Nov 26 01:26:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:26:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:26:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:00 compute-0 python3.9[269883]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
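The ansible task above is a dry run: it concatenates the edpm nftables fragments and pipes them through `nft -c -f -` (check mode, ruleset read from stdin), so syntax errors surface before anything is committed; the actual apply (`nft -f /etc/nftables/edpm-chains.nft`) only appears further down at 01:26:03. A sketch of the same pre-flight check, using the exact fragment paths and flags from the logged command:

```python
# Pre-flight validation mirroring the logged `cat ... | nft -c -f -`.
import pathlib
import subprocess

# Fragment order matters: chains, then flushes, rules, update-jumps, jumps,
# exactly as in the logged invocation.
FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

ruleset = "".join(pathlib.Path(p).read_text() for p in FRAGMENTS)

# -c: check only (parse and validate, commit nothing); -f -: read from stdin.
result = subprocess.run(["nft", "-c", "-f", "-"],
                        input=ruleset, text=True, capture_output=True)
if result.returncode != 0:
    raise SystemExit(f"nftables validation failed:\n{result.stderr}")
print("edpm ruleset validates cleanly")
```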
Nov 26 01:26:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:01 compute-0 openstack_network_exporter[160178]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:26:01 compute-0 openstack_network_exporter[160178]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:26:01 compute-0 openstack_network_exporter[160178]: ERROR   01:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:26:01 compute-0 openstack_network_exporter[160178]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:26:01 compute-0 openstack_network_exporter[160178]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:26:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 902d4712-0df2-4e22-8ef5-8ca88deaa737 does not exist
Nov 26 01:26:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1112ef9f-f0aa-4fe7-9677-9a2dddccb61a does not exist
Nov 26 01:26:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ab154a9a-8f3c-4dc3-90e3-62d8ef28f1d5 does not exist
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:26:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:26:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:26:02 compute-0 python3.9[270193]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
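Decoded, the blockinfile parameters above (block=... with #012 standing for newline, marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin BEGIN, marker_end END, validated with `nft -c -f %s` before the write is kept) produce this stanza in /etc/sysconfig/nftables.conf:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```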
Nov 26 01:26:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:26:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.764268031 +0000 UTC m=+0.089897840 container create 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.730716593 +0000 UTC m=+0.056346442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:02 compute-0 systemd[1]: Started libpod-conmon-966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e.scope.
Nov 26 01:26:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.900387736 +0000 UTC m=+0.226017585 container init 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.916224713 +0000 UTC m=+0.241854512 container start 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.923733195 +0000 UTC m=+0.249362994 container attach 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:26:02 compute-0 zealous_faraday[270407]: 167 167
Nov 26 01:26:02 compute-0 systemd[1]: libpod-966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e.scope: Deactivated successfully.
Nov 26 01:26:02 compute-0 podman[270363]: 2025-11-26 01:26:02.928678485 +0000 UTC m=+0.254308284 container died 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07dcc29bcc491cf1fe8a96dfefbc419df8b0fa91e3dcbc5e0e7b89bbc1e0d67-merged.mount: Deactivated successfully.
Nov 26 01:26:03 compute-0 podman[270363]: 2025-11-26 01:26:03.008383895 +0000 UTC m=+0.334013674 container remove 966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:26:03 compute-0 systemd[1]: libpod-conmon-966a7122fbe03659d52c14caf717ef9692bad7be372059b108c74494b855578e.scope: Deactivated successfully.
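The zealous_faraday sequence above (create, init, start, attach, one line of output, died, remove, all within a quarter of a second) is the normal footprint of a one-shot container: cephadm runs the ceph image briefly to probe the host, then discards it. A hypothetical reproduction, with an illustrative container name and a trivial command, that emits the same event chain:

```python
# One-shot container: podman logs the same create/init/start/attach/
# died/remove lifecycle seen in the journal above, then removes it (--rm).
import subprocess

subprocess.run(
    ["podman", "run", "--rm", "--name", "lifecycle_demo",
     "quay.io/centos/centos:stream9", "true"],
    check=True,
)
# In another shell, `podman events --filter container=lifecycle_demo`
# shows the matching lifecycle records.
```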
Nov 26 01:26:03 compute-0 podman[270477]: 2025-11-26 01:26:03.270491518 +0000 UTC m=+0.085873797 container create 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:26:03 compute-0 python3.9[270471]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:26:03 compute-0 podman[270477]: 2025-11-26 01:26:03.24048175 +0000 UTC m=+0.055864089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:03 compute-0 systemd[1]: Started libpod-conmon-987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d.scope.
Nov 26 01:26:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:03 compute-0 podman[270477]: 2025-11-26 01:26:03.438536014 +0000 UTC m=+0.253918313 container init 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:26:03 compute-0 podman[270477]: 2025-11-26 01:26:03.473730488 +0000 UTC m=+0.289112767 container start 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:26:03 compute-0 podman[270477]: 2025-11-26 01:26:03.481001634 +0000 UTC m=+0.296383923 container attach 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:26:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:04 compute-0 python3.9[270659]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:26:04 compute-0 upbeat_nightingale[270494]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:26:04 compute-0 upbeat_nightingale[270494]: --> relative data size: 1.0
Nov 26 01:26:04 compute-0 upbeat_nightingale[270494]: --> All data devices are unavailable
Nov 26 01:26:04 compute-0 systemd[1]: libpod-987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d.scope: Deactivated successfully.
Nov 26 01:26:04 compute-0 systemd[1]: libpod-987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d.scope: Consumed 1.186s CPU time.
Nov 26 01:26:04 compute-0 podman[270477]: 2025-11-26 01:26:04.731792601 +0000 UTC m=+1.547174920 container died 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b89b5104ece21ec796772994ddc6721fc2d8b66553a7b88f79e2848f935f1dd7-merged.mount: Deactivated successfully.
Nov 26 01:26:04 compute-0 podman[270477]: 2025-11-26 01:26:04.853059046 +0000 UTC m=+1.668441335 container remove 987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:26:04 compute-0 systemd[1]: libpod-conmon-987ac6e802e0577e98356d9b9458f1415029ee9d1630670fb356834debf04e9d.scope: Deactivated successfully.
Nov 26 01:26:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:05 compute-0 podman[270882]: 2025-11-26 01:26:05.396037642 +0000 UTC m=+0.114048082 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, io.openshift.expose-services=)
Nov 26 01:26:05 compute-0 podman[270885]: 2025-11-26 01:26:05.414050681 +0000 UTC m=+0.123168450 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:26:05 compute-0 python3.9[270974]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:26:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5588 writes, 23K keys, 5588 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5588 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5588 writes, 23K keys, 5588 commit groups, 1.0 writes per commit group, ingest: 18.43 MB, 0.03 MB/s#012Interval WAL: 5588 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
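The rocksdb DUMPING STATS record above is a single journal line (truncated at its tail): rsyslog escapes embedded control characters octally, so #012 stands for a newline. Piping the log through a small unescape filter, a sketch assuming only the newline and tab escapes occur, restores the original table layout:

```python
# Undo rsyslog's octal control-character escaping: '\n' is logged as #012
# and '\t' as #011, so multi-line records such as the rocksdb stats dump
# above become readable again when these two are unescaped.
import sys

def unescape_syslog(line: str) -> str:
    return line.replace("#012", "\n").replace("#011", "\t")

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(unescape_syslog(line))
```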
Nov 26 01:26:05 compute-0 podman[271035]: 2025-11-26 01:26:05.949252057 +0000 UTC m=+0.057673860 container create 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:26:06 compute-0 systemd[1]: Started libpod-conmon-3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae.scope.
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:05.926889286 +0000 UTC m=+0.035311069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 01:26:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:06.086673529 +0000 UTC m=+0.195095372 container init 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:06.10514242 +0000 UTC m=+0.213564213 container start 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:06.112507538 +0000 UTC m=+0.220929391 container attach 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:26:06 compute-0 stoic_ganguly[271049]: 167 167
Nov 26 01:26:06 compute-0 systemd[1]: libpod-3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae.scope: Deactivated successfully.
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:06.117622983 +0000 UTC m=+0.226044776 container died 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-56141311749f15cf09be9af2692a000036136d2b0c526636ac6cac99f4e3ee59-merged.mount: Deactivated successfully.
Nov 26 01:26:06 compute-0 podman[271035]: 2025-11-26 01:26:06.198335543 +0000 UTC m=+0.306757346 container remove 3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:26:06 compute-0 systemd[1]: libpod-conmon-3dd56aec0f0eb690f74532fd0b6aed55df77f995b62996a68f2c7f3db16976ae.scope: Deactivated successfully.
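Note: the create/init/start/attach/died/remove burst above is a short-lived cephadm helper container: it runs once against the Ceph image, prints "167 167", and is torn down within roughly 150 ms. That output is consistent with cephadm's uid/gid probe (the ceph user and group are uid/gid 167 in upstream Ceph images). A rough by-hand reproduction, where the stat target is an assumption rather than something visible in the log:

  # Hypothetical uid/gid probe; /var/lib/ceph is an assumed in-image path
  $ sudo podman run --rm \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      stat -c '%u %g' /var/lib/ceph
  167 167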
Nov 26 01:26:06 compute-0 podman[271119]: 2025-11-26 01:26:06.484913977 +0000 UTC m=+0.091029062 container create 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:26:06 compute-0 podman[271119]: 2025-11-26 01:26:06.443152397 +0000 UTC m=+0.049267552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:06 compute-0 systemd[1]: Started libpod-conmon-34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71.scope.
Nov 26 01:26:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecbea08f5dfff9fa78efc88bfe164fe13aca4f0cf5d7567f8ec8d035248f82d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecbea08f5dfff9fa78efc88bfe164fe13aca4f0cf5d7567f8ec8d035248f82d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecbea08f5dfff9fa78efc88bfe164fe13aca4f0cf5d7567f8ec8d035248f82d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ecbea08f5dfff9fa78efc88bfe164fe13aca4f0cf5d7567f8ec8d035248f82d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:06 compute-0 podman[271119]: 2025-11-26 01:26:06.651949283 +0000 UTC m=+0.258064398 container init 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:26:06 compute-0 podman[271119]: 2025-11-26 01:26:06.67095071 +0000 UTC m=+0.277065835 container start 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:26:06 compute-0 podman[271119]: 2025-11-26 01:26:06.677999869 +0000 UTC m=+0.284114974 container attach 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:26:07 compute-0 python3.9[271218]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:26:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:07 compute-0 modest_galileo[271162]: {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    "0": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "devices": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "/dev/loop3"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            ],
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_name": "ceph_lv0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_size": "21470642176",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "name": "ceph_lv0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "tags": {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_name": "ceph",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.crush_device_class": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.encrypted": "0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_id": "0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.vdo": "0"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            },
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "vg_name": "ceph_vg0"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        }
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    ],
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    "1": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "devices": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "/dev/loop4"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            ],
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_name": "ceph_lv1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_size": "21470642176",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "name": "ceph_lv1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "tags": {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_name": "ceph",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.crush_device_class": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.encrypted": "0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_id": "1",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.vdo": "0"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            },
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "vg_name": "ceph_vg1"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        }
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    ],
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    "2": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "devices": [
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "/dev/loop5"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            ],
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_name": "ceph_lv2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_size": "21470642176",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "name": "ceph_lv2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "tags": {
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.cluster_name": "ceph",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.crush_device_class": "",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.encrypted": "0",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osd_id": "2",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:                "ceph.vdo": "0"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            },
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "type": "block",
Nov 26 01:26:07 compute-0 modest_galileo[271162]:            "vg_name": "ceph_vg2"
Nov 26 01:26:07 compute-0 modest_galileo[271162]:        }
Nov 26 01:26:07 compute-0 modest_galileo[271162]:    ]
Nov 26 01:26:07 compute-0 modest_galileo[271162]: }
Nov 26 01:26:07 compute-0 systemd[1]: libpod-34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71.scope: Deactivated successfully.
Nov 26 01:26:07 compute-0 podman[271119]: 2025-11-26 01:26:07.466801198 +0000 UTC m=+1.072916273 container died 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ecbea08f5dfff9fa78efc88bfe164fe13aca4f0cf5d7567f8ec8d035248f82d-merged.mount: Deactivated successfully.
Nov 26 01:26:07 compute-0 podman[271119]: 2025-11-26 01:26:07.548215018 +0000 UTC m=+1.154330123 container remove 34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:26:07 compute-0 systemd[1]: libpod-conmon-34c9ce124cbdf486e312309d70a855f28cd415a9075819186d036218f162aa71.scope: Deactivated successfully.
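Note: the JSON printed by modest_galileo maps OSDs 0, 1 and 2 to ceph_vg{0,1,2}/ceph_lv{0,1,2} on /dev/loop{3,4,5}, all tagged with the same cluster_fsid. The shape matches `ceph-volume lvm list --format json`, which cephadm runs inside exactly this kind of disposable container. An inferred by-hand equivalent (the subcommand is standard ceph-volume, but its use here is deduced from the output shape, not stated in the log):

  # Re-run the LVM-based OSD inventory this helper container appears to perform
  $ sudo cephadm ceph-volume -- lvm list --format json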
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.683231606 +0000 UTC m=+0.089764237 container create 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.64726805 +0000 UTC m=+0.053800731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:08 compute-0 python3.9[271524]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:26:08 compute-0 systemd[1]: Started libpod-conmon-0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5.scope.
Nov 26 01:26:08 compute-0 ovs-vsctl[271541]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
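Note: this ovs-vsctl call wires the node into OVN: geneve encapsulation sourced from 172.19.0.100, the datacentre:br-ex bridge mapping, and ssl:ovsdbserver-sb.openstack.svc:6642 as the southbound DB. The settings land in the external_ids column of the single Open_vSwitch row and can be read back directly:

  # Read back the chassis configuration written above
  $ sudo ovs-vsctl get open . external_ids:ovn-remote
  $ sudo ovs-vsctl --columns=external_ids list open .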
Nov 26 01:26:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.83596652 +0000 UTC m=+0.242499151 container init 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.845729305 +0000 UTC m=+0.252261896 container start 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.850608063 +0000 UTC m=+0.257140684 container attach 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:26:08 compute-0 distracted_mclaren[271542]: 167 167
Nov 26 01:26:08 compute-0 systemd[1]: libpod-0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5.scope: Deactivated successfully.
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.854441931 +0000 UTC m=+0.260974562 container died 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdf9c8e76258778432026fb27968e5de004f9ddacb1272b8cba29d4eb2556594-merged.mount: Deactivated successfully.
Nov 26 01:26:08 compute-0 podman[271525]: 2025-11-26 01:26:08.930937992 +0000 UTC m=+0.337470623 container remove 0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:26:08 compute-0 systemd[1]: libpod-conmon-0d07b3f0811ac30b60e10fc9ae3a965d6e7cfca1922fd025dd29ed6d8d6d64f5.scope: Deactivated successfully.
Nov 26 01:26:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:09 compute-0 podman[271602]: 2025-11-26 01:26:09.19427428 +0000 UTC m=+0.096133867 container create 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:26:09 compute-0 podman[271602]: 2025-11-26 01:26:09.156181434 +0000 UTC m=+0.058041091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:26:09 compute-0 systemd[1]: Started libpod-conmon-7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949.scope.
Nov 26 01:26:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa5c67abccd4e0ea4eb4252114195e4dc5a5c6a0b41bd24b8f93e04064174f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa5c67abccd4e0ea4eb4252114195e4dc5a5c6a0b41bd24b8f93e04064174f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa5c67abccd4e0ea4eb4252114195e4dc5a5c6a0b41bd24b8f93e04064174f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56aa5c67abccd4e0ea4eb4252114195e4dc5a5c6a0b41bd24b8f93e04064174f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:26:09 compute-0 podman[271602]: 2025-11-26 01:26:09.374416168 +0000 UTC m=+0.276275815 container init 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:26:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:09 compute-0 podman[271602]: 2025-11-26 01:26:09.398368244 +0000 UTC m=+0.300227841 container start 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:26:09 compute-0 podman[271602]: 2025-11-26 01:26:09.405199347 +0000 UTC m=+0.307058984 container attach 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:26:09 compute-0 python3.9[271736]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
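Note: the command task above is a guard rather than a change: with `set -o pipefail`, `ovs-vsctl show | grep -q "Manager"` fails when either ovs-vsctl errors out or no manager is configured, instead of grep silently masking an ovs-vsctl failure. Run interactively it looks like:

  # Same guard, with the exit status made visible
  $ set -o pipefail; sudo ovs-vsctl show | grep -q "Manager"; echo "rc=$?"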
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]: {
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_id": 0,
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "type": "bluestore"
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    },
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_id": 2,
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "type": "bluestore"
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    },
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_id": 1,
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:        "type": "bluestore"
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]:    }
Nov 26 01:26:10 compute-0 cool_ramanujan[271655]: }
Nov 26 01:26:10 compute-0 systemd[1]: libpod-7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949.scope: Deactivated successfully.
Nov 26 01:26:10 compute-0 podman[271602]: 2025-11-26 01:26:10.588172968 +0000 UTC m=+1.490032535 container died 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:26:10 compute-0 systemd[1]: libpod-7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949.scope: Consumed 1.187s CPU time.
Nov 26 01:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-56aa5c67abccd4e0ea4eb4252114195e4dc5a5c6a0b41bd24b8f93e04064174f-merged.mount: Deactivated successfully.
Nov 26 01:26:10 compute-0 podman[271602]: 2025-11-26 01:26:10.689055798 +0000 UTC m=+1.590915365 container remove 7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ramanujan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:26:10 compute-0 systemd[1]: libpod-conmon-7ea7b395af1108b08d3225dee0ba1516f24972d0d10db7e9ba9d0e503afaa949.scope: Deactivated successfully.
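Note: cool_ramanujan reports the same three OSDs again, this time keyed by osd_uuid with /dev/mapper device paths and "type": "bluestore". That shape matches `ceph-volume raw list`, which discovers OSDs from BlueStore labels on the devices rather than from LVM tags, so it cross-checks the LVM inventory above. Inferred equivalent (again deduced from the output shape, not stated in the log; `raw list` prints JSON by default):

  # Cross-check OSD discovery via BlueStore labels instead of LVM tags
  $ sudo cephadm ceph-volume -- raw list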
Nov 26 01:26:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:26:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:26:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:10 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ebf476e4-99fa-41a4-910c-82cd077ef63f does not exist
Nov 26 01:26:10 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d15a3aa2-4ddb-45bb-950a-25497b8e3415 does not exist
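Note: the two mon_command calls persist the inventory just gathered into the monitors' config-key store under mgr/cephadm/host.compute-0*; that cache is what the cephadm mgr module serves back for queries such as `ceph orch device ls`. The two progress warnings ("ev ... does not exist") appear to be the mgr completing progress events that were already cleaned up, and look harmless here. The cached blob can be inspected directly (key name copied from the log):

  # Dump the device inventory cached by cephadm for this host
  $ sudo ceph config-key get mgr/cephadm/host.compute-0.devices.0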
Nov 26 01:26:11 compute-0 python3.9[271956]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:26:12 compute-0 python3.9[272135]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:26:13 compute-0 python3.9[272287]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:13 compute-0 python3.9[272365]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:26:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:15 compute-0 python3.9[272517]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:15 compute-0 python3.9[272595]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
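Note: the stat/file pairs at 01:26:13-01:26:15 are the two ends of an ansible copy: legacy.stat checksums the current file, and legacy.file then enforces root:root ownership, mode 0700 and the container_file_t SELinux type on the deployed helper scripts. A plain-shell sketch of the enforced end state (the local source path is hypothetical):

  # Hand-rolled equivalent of the copy + file tasks; source path assumed
  $ sudo install -o root -g root -m 0700 ./edpm-container-shutdown /var/local/libexec/edpm-container-shutdown
  $ sudo chcon -t container_file_t /var/local/libexec/edpm-container-shutdown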
Nov 26 01:26:16 compute-0 python3.9[272747]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:18 compute-0 python3.9[272899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:20 compute-0 python3.9[272978]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:21 compute-0 python3.9[273130]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:22 compute-0 python3.9[273208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:23 compute-0 python3.9[273360]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:26:23 compute-0 systemd[1]: Reloading.
Nov 26 01:26:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:26:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
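Note: dropping 91-edpm-container-shutdown.preset into /etc/systemd/system-preset and then invoking the systemd module with daemon_reload=True, enabled=True, state=started is the usual two-pronged pattern: the preset covers future `systemctl preset-all` runs, the module call covers the current boot. The manual equivalent of the module call:

  # Manual equivalent of the ansible systemd task above
  $ sudo systemctl daemon-reload
  $ sudo systemctl enable --now edpm-container-shutdown.service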
Nov 26 01:26:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:25 compute-0 python3.9[273549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:25 compute-0 python3.9[273627]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:26 compute-0 podman[273727]: 2025-11-26 01:26:26.60230166 +0000 UTC m=+0.145831420 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:26:26 compute-0 podman[273729]: 2025-11-26 01:26:26.605566252 +0000 UTC m=+0.143693279 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:26:26 compute-0 podman[273732]: 2025-11-26 01:26:26.669148848 +0000 UTC m=+0.198652222 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
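Note: the three health_status=healthy events are podman's own healthcheck timers firing; each container's config_data shows its probe ('/openstack/healthcheck ...') bind-mounted read-only from /var/lib/openstack/healthchecks/<name>. The same probe can be run on demand, which is useful when a failing streak starts climbing:

  # Trigger one container's configured health probe by hand
  $ sudo podman healthcheck run ceilometer_agent_compute && echo healthy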
Nov 26 01:26:26 compute-0 python3.9[273844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:27 compute-0 python3.9[273922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:28 compute-0 python3.9[274074]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:26:28 compute-0 systemd[1]: Reloading.
Nov 26 01:26:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:26:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:26:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:29 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 01:26:29 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 01:26:29 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 01:26:29 compute-0 systemd[1]: Finished Create netns directory.
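Note: netns-placeholder runs as a oneshot: "Starting Create netns directory", a run-netns-placeholder.mount deactivation, then "Finished", all within the same second. The unit name and the transient mount suggest it creates a placeholder network namespace so /run/netns exists as a mount point for later ip-netns consumers, but the unit body is not shown in the log, so inspect rather than assume:

  # See what the oneshot actually does, and what namespaces exist afterwards
  $ systemctl cat netns-placeholder.service
  $ ip netns list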
Nov 26 01:26:29 compute-0 podman[158021]: time="2025-11-26T01:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:26:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:26:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6828 "" "Go-http-client/1.1"
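Note: the two GET lines are podman's REST service (pid 158021) answering libpod API calls, most plausibly from podman_exporter, whose config_data at 01:26:26 sets CONTAINER_HOST=unix:///run/podman/podman.sock. The same endpoints can be hit with curl over that socket (the "http://d" host is a throwaway placeholder curl requires):

  # Query the libpod API the exporter is scraping
  $ sudo curl -s --unix-socket /run/podman/podman.sock \
      'http://d/v4.9.3/libpod/containers/json?all=true' | head -c 300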
Nov 26 01:26:30 compute-0 podman[274239]: 2025-11-26 01:26:30.453433241 +0000 UTC m=+0.103974597 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:26:30 compute-0 podman[274238]: 2025-11-26 01:26:30.485287581 +0000 UTC m=+0.132332369 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:26:30 compute-0 python3.9[274311]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:26:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: ERROR   01:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:26:31 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:26:31 compute-0 python3.9[274463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:32 compute-0 python3.9[274541]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:26:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:33 compute-0 python3.9[274693]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:26:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:34 compute-0 python3.9[274845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:26:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:35 compute-0 podman[274924]: 2025-11-26 01:26:35.585320516 +0000 UTC m=+0.133083430 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543)
Nov 26 01:26:35 compute-0 podman[274925]: 2025-11-26 01:26:35.594401702 +0000 UTC m=+0.136389563 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 01:26:35 compute-0 python3.9[274923]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.f5hbdv9g recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:36 compute-0 python3.9[275114]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:39 compute-0 python3.9[275496]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:26:41
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'backups', '.mgr', 'images', 'default.rgw.control', 'volumes', 'vms']
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:26:41 compute-0 python3.9[275648]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:26:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:42 compute-0 python3.9[275800]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 01:26:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:44 compute-0 python3[275978]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:26:45 compute-0 python3[275978]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c",#012          "Digest": "sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:40:43.504967825Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345731014,#012          "VirtualSize": 345731014,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:2e0f9ca9a8387a3566096aacaecfe5797e3fc2585f07cb97a1706897fa1a86a3",#012                    "sha256:db37b2d335b44e6a9cb2eb88713051bc469233d1e0a06670f1303bc9539b97a0"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:39.924297673Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-li
Nov 26 01:26:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:46 compute-0 python3.9[276187]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:26:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:47 compute-0 python3.9[276341]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:48 compute-0 python3.9[276417]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:26:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:49 compute-0 python3.9[276569]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764120408.5017245-536-153281699893839/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:26:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:26:50 compute-0 python3.9[276645]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:26:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:51 compute-0 python3.9[276799]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:26:51 compute-0 ovs-vsctl[276800]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 26 01:26:52 compute-0 python3.9[276952]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:26:52 compute-0 ovs-vsctl[276954]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 26 01:26:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:54 compute-0 python3.9[277107]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:26:54 compute-0 ovs-vsctl[277108]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 26 01:26:54 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 26 01:26:54 compute-0 systemd[1]: session-52.scope: Consumed 1min 11.471s CPU time.
Nov 26 01:26:54 compute-0 systemd-logind[800]: Session 52 logged out. Waiting for processes to exit.
Nov 26 01:26:54 compute-0 systemd-logind[800]: Removed session 52.
Nov 26 01:26:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:57 compute-0 podman[277133]: 2025-11-26 01:26:57.583567143 +0000 UTC m=+0.120681490 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:26:57 compute-0 podman[277134]: 2025-11-26 01:26:57.61709075 +0000 UTC m=+0.153386234 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:26:57 compute-0 podman[277135]: 2025-11-26 01:26:57.641938522 +0000 UTC m=+0.168398828 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:26:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:26:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:26:59 compute-0 podman[158021]: time="2025-11-26T01:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:26:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:26:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6822 "" "Go-http-client/1.1"
Nov 26 01:27:00 compute-0 systemd-logind[800]: New session 53 of user zuul.
Nov 26 01:27:00 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: ERROR   01:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:27:01 compute-0 openstack_network_exporter[160178]: 
Nov 26 01:27:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:01 compute-0 podman[277329]: 2025-11-26 01:27:01.448582455 +0000 UTC m=+0.136959509 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, version=9.6)
Nov 26 01:27:01 compute-0 podman[277330]: 2025-11-26 01:27:01.481204167 +0000 UTC m=+0.164702833 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:27:01 compute-0 python3.9[277385]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:27:03 compute-0 python3.9[277554]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:04 compute-0 python3.9[277706]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:05 compute-0 python3.9[277858]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:06 compute-0 podman[277982]: 2025-11-26 01:27:06.370513961 +0000 UTC m=+0.116091840 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm)
Nov 26 01:27:06 compute-0 podman[277983]: 2025-11-26 01:27:06.393616213 +0000 UTC m=+0.134122559 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:27:06 compute-0 python3.9[278047]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:07 compute-0 python3.9[278200]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:08 compute-0 python3.9[278350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:27:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:09 compute-0 python3.9[278502]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:11 compute-0 python3.9[278752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:12 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 72a1405b-9dc5-4999-bf8d-b872f22b00bd does not exist
Nov 26 01:27:12 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e5fa0073-82da-4517-b5a0-e190ad1b7298 does not exist
Nov 26 01:27:12 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6dcc08e0-591e-425e-bca4-399a2d98663d does not exist
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:27:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:27:12 compute-0 python3.9[278986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120430.9810066-86-21447497166990/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.159996403 +0000 UTC m=+0.089180320 container create 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.124260373 +0000 UTC m=+0.053444340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:13 compute-0 systemd[1]: Started libpod-conmon-7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24.scope.
Nov 26 01:27:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.300217243 +0000 UTC m=+0.229401160 container init 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.311380698 +0000 UTC m=+0.240564585 container start 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.316333238 +0000 UTC m=+0.245517165 container attach 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:27:13 compute-0 nice_moser[279136]: 167 167
Nov 26 01:27:13 compute-0 systemd[1]: libpod-7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24.scope: Deactivated successfully.
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.321420982 +0000 UTC m=+0.250604959 container died 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a34b40979340cbf38dd3773eb78a7739480c081b73b6e5451757f3ac7d0635b-merged.mount: Deactivated successfully.
Nov 26 01:27:13 compute-0 podman[279090]: 2025-11-26 01:27:13.39322488 +0000 UTC m=+0.322408767 container remove 7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:27:13 compute-0 systemd[1]: libpod-conmon-7ff946c6b33a31d5cce488156e416d2445ca6717310aec5cce7e05bde6241c24.scope: Deactivated successfully.
Nov 26 01:27:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:13 compute-0 podman[279215]: 2025-11-26 01:27:13.661367393 +0000 UTC m=+0.079091654 container create 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:27:13 compute-0 podman[279215]: 2025-11-26 01:27:13.627793915 +0000 UTC m=+0.045518236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:13 compute-0 systemd[1]: Started libpod-conmon-0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e.scope.
Nov 26 01:27:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:13 compute-0 python3.9[279242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:13 compute-0 podman[279215]: 2025-11-26 01:27:13.829111541 +0000 UTC m=+0.246835832 container init 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:27:13 compute-0 podman[279215]: 2025-11-26 01:27:13.848458968 +0000 UTC m=+0.266183219 container start 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:27:13 compute-0 podman[279215]: 2025-11-26 01:27:13.854525109 +0000 UTC m=+0.272249430 container attach 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:27:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:14 compute-0 python3.9[279376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120433.0674698-101-119492470273069/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:15 compute-0 heuristic_hopper[279249]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:27:15 compute-0 heuristic_hopper[279249]: --> relative data size: 1.0
Nov 26 01:27:15 compute-0 heuristic_hopper[279249]: --> All data devices are unavailable
Nov 26 01:27:15 compute-0 systemd[1]: libpod-0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e.scope: Deactivated successfully.
Nov 26 01:27:15 compute-0 podman[279215]: 2025-11-26 01:27:15.078421956 +0000 UTC m=+1.496146187 container died 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:27:15 compute-0 systemd[1]: libpod-0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e.scope: Consumed 1.167s CPU time.
Nov 26 01:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-78afd85facea6b46ad3614e856681aaf0a5acf20b4040cec35b8d18a286ea42e-merged.mount: Deactivated successfully.
Nov 26 01:27:15 compute-0 podman[279215]: 2025-11-26 01:27:15.182552207 +0000 UTC m=+1.600276468 container remove 0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:27:15 compute-0 systemd[1]: libpod-conmon-0963acc2d4ef1d049962c4cbc60f1a30ef9a1966faffd9fb6472928ae69a340e.scope: Deactivated successfully.
Nov 26 01:27:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:16 compute-0 python3.9[279660]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.434012604 +0000 UTC m=+0.084949241 container create 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.401976519 +0000 UTC m=+0.052913216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:16 compute-0 systemd[1]: Started libpod-conmon-9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b.scope.
Nov 26 01:27:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.577192748 +0000 UTC m=+0.228129425 container init 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.593728985 +0000 UTC m=+0.244665622 container start 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.60027592 +0000 UTC m=+0.251212627 container attach 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:27:16 compute-0 vigorous_jackson[279726]: 167 167
Nov 26 01:27:16 compute-0 systemd[1]: libpod-9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b.scope: Deactivated successfully.
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.605397064 +0000 UTC m=+0.256333711 container died 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d4a325001a18a596e89a9568f63af2f12155d0e08e71b1fb5bb3b0927bf715a-merged.mount: Deactivated successfully.
Nov 26 01:27:16 compute-0 podman[279705]: 2025-11-26 01:27:16.68135708 +0000 UTC m=+0.332293727 container remove 9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:16 compute-0 systemd[1]: libpod-conmon-9937dadd9db85216ccf9a5c55ba03009bf99839911c5f142300880ccf437cf6b.scope: Deactivated successfully.
Nov 26 01:27:16 compute-0 podman[279794]: 2025-11-26 01:27:16.956578593 +0000 UTC m=+0.082769329 container create e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:27:17 compute-0 podman[279794]: 2025-11-26 01:27:16.926095422 +0000 UTC m=+0.052286208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:17 compute-0 systemd[1]: Started libpod-conmon-e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675.scope.
Nov 26 01:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8acc3e7156b91caae6f6732704f61df1441a508c67788eda5594bd99861c8e0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8acc3e7156b91caae6f6732704f61df1441a508c67788eda5594bd99861c8e0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8acc3e7156b91caae6f6732704f61df1441a508c67788eda5594bd99861c8e0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8acc3e7156b91caae6f6732704f61df1441a508c67788eda5594bd99861c8e0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:17 compute-0 podman[279794]: 2025-11-26 01:27:17.128080557 +0000 UTC m=+0.254271343 container init e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:27:17 compute-0 podman[279794]: 2025-11-26 01:27:17.157690443 +0000 UTC m=+0.283881149 container start e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:27:17 compute-0 podman[279794]: 2025-11-26 01:27:17.162407467 +0000 UTC m=+0.288598213 container attach e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:17 compute-0 python3.9[279843]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:27:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:17 compute-0 admiring_kare[279841]: {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    "0": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "devices": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "/dev/loop3"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            ],
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_name": "ceph_lv0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_size": "21470642176",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "name": "ceph_lv0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "tags": {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_name": "ceph",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.crush_device_class": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.encrypted": "0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_id": "0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.vdo": "0"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            },
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "vg_name": "ceph_vg0"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        }
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    ],
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    "1": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "devices": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "/dev/loop4"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            ],
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_name": "ceph_lv1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_size": "21470642176",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "name": "ceph_lv1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "tags": {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_name": "ceph",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.crush_device_class": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.encrypted": "0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_id": "1",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.vdo": "0"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            },
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "vg_name": "ceph_vg1"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        }
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    ],
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    "2": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "devices": [
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "/dev/loop5"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            ],
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_name": "ceph_lv2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_size": "21470642176",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "name": "ceph_lv2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "tags": {
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.cluster_name": "ceph",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.crush_device_class": "",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.encrypted": "0",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osd_id": "2",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:                "ceph.vdo": "0"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            },
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "type": "block",
Nov 26 01:27:17 compute-0 admiring_kare[279841]:            "vg_name": "ceph_vg2"
Nov 26 01:27:17 compute-0 admiring_kare[279841]:        }
Nov 26 01:27:17 compute-0 admiring_kare[279841]:    ]
Nov 26 01:27:17 compute-0 admiring_kare[279841]: }
Nov 26 01:27:17 compute-0 podman[279794]: 2025-11-26 01:27:17.978454285 +0000 UTC m=+1.104645001 container died e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:27:17 compute-0 systemd[1]: libpod-e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675.scope: Deactivated successfully.
Nov 26 01:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8acc3e7156b91caae6f6732704f61df1441a508c67788eda5594bd99861c8e0d-merged.mount: Deactivated successfully.
Nov 26 01:27:18 compute-0 podman[279794]: 2025-11-26 01:27:18.075060014 +0000 UTC m=+1.201250730 container remove e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:18 compute-0 systemd[1]: libpod-conmon-e32e6129eb3f8b53202055ccd071bef4afd1b30d66fa0df1e07afc4488e06675.scope: Deactivated successfully.
Nov 26 01:27:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.161455697 +0000 UTC m=+0.085790404 container create cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.13110207 +0000 UTC m=+0.055436827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:19 compute-0 systemd[1]: Started libpod-conmon-cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438.scope.
Nov 26 01:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.288529156 +0000 UTC m=+0.212863943 container init cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.30352677 +0000 UTC m=+0.227861517 container start cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:27:19 compute-0 inspiring_vaughan[280092]: 167 167
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.310393054 +0000 UTC m=+0.234727821 container attach cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:27:19 compute-0 systemd[1]: libpod-cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438.scope: Deactivated successfully.
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.312754911 +0000 UTC m=+0.237089628 container died cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe51d4e202d28efdc7aeea3101c49ee272b1a095190628372b81ca159534386a-merged.mount: Deactivated successfully.
Nov 26 01:27:19 compute-0 podman[280065]: 2025-11-26 01:27:19.38886534 +0000 UTC m=+0.313200057 container remove cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:27:19 compute-0 systemd[1]: libpod-conmon-cdbe3be0d107fde36fdb4509b86b5897334eaca55057140aa51decda7e57d438.scope: Deactivated successfully.
Nov 26 01:27:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:19 compute-0 podman[280138]: 2025-11-26 01:27:19.63493095 +0000 UTC m=+0.070949945 container create 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:27:19 compute-0 systemd[1]: Started libpod-conmon-80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0.scope.
Nov 26 01:27:19 compute-0 podman[280138]: 2025-11-26 01:27:19.612539828 +0000 UTC m=+0.048558803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128857cf77f0b9f5db5b85fd8a3804e62b0fc6af51736a570a22a4e90f10c248/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128857cf77f0b9f5db5b85fd8a3804e62b0fc6af51736a570a22a4e90f10c248/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128857cf77f0b9f5db5b85fd8a3804e62b0fc6af51736a570a22a4e90f10c248/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128857cf77f0b9f5db5b85fd8a3804e62b0fc6af51736a570a22a4e90f10c248/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:27:19 compute-0 podman[280138]: 2025-11-26 01:27:19.783721803 +0000 UTC m=+0.219740848 container init 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:19 compute-0 podman[280138]: 2025-11-26 01:27:19.808715389 +0000 UTC m=+0.244734374 container start 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:27:19 compute-0 podman[280138]: 2025-11-26 01:27:19.814374368 +0000 UTC m=+0.250393343 container attach 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:27:20 compute-0 python3.9[280209]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]: {
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_id": 0,
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "type": "bluestore"
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    },
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_id": 2,
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "type": "bluestore"
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    },
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_id": 1,
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:        "type": "bluestore"
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]:    }
Nov 26 01:27:20 compute-0 suspicious_blackburn[280177]: }
Nov 26 01:27:21 compute-0 systemd[1]: libpod-80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0.scope: Deactivated successfully.
Nov 26 01:27:21 compute-0 systemd[1]: libpod-80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0.scope: Consumed 1.195s CPU time.
Nov 26 01:27:21 compute-0 podman[280138]: 2025-11-26 01:27:21.002543537 +0000 UTC m=+1.438562542 container died 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-128857cf77f0b9f5db5b85fd8a3804e62b0fc6af51736a570a22a4e90f10c248-merged.mount: Deactivated successfully.
Nov 26 01:27:21 compute-0 podman[280138]: 2025-11-26 01:27:21.117323059 +0000 UTC m=+1.553342014 container remove 80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:27:21 compute-0 systemd[1]: libpod-conmon-80aaab3fc32efe2d642e755e7ec1fea2cf24d82dcb08e689445386af6e7349e0.scope: Deactivated successfully.
Nov 26 01:27:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:27:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:27:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:21 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 475231d2-7c13-43df-bf21-0e1045d4b06f does not exist
Nov 26 01:27:21 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fca83a24-cef9-4967-b9c6-ef7bb2c7e62f does not exist
Nov 26 01:27:21 compute-0 python3.9[280403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:27:22 compute-0 python3.9[280574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120440.6368964-138-224581222424400/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
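The ansible-ansible.legacy.stat / ansible-ansible.legacy.copy pairs in this stretch are Ansible's idempotent file deployment: stat returns the sha1 of the destination, and copy only rewrites the file (then applies mode and SELinux type) when that checksum differs from the source. A rough standalone equivalent of the checksum-gated copy, with hypothetical paths:

# Rough standalone equivalent of the stat-then-copy pattern logged
# above: rewrite dest only when its sha1 differs from the source.
import hashlib
import os
import shutil

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def idempotent_copy(src, dest, mode=0o644):
    if os.path.exists(dest) and sha1_of(dest) == sha1_of(src):
        return False                 # checksums match: report "ok"
    shutil.copyfile(src, dest)
    os.chmod(dest, mode)
    return True                      # file rewritten: report "changed"

# Hypothetical usage:
# idempotent_copy("rootwrap.conf", "/tmp/01-rootwrap.conf")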
Nov 26 01:27:23 compute-0 python3.9[280724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:24 compute-0 python3.9[280845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120442.5178065-138-119018908027112/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:25 compute-0 python3.9[280995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:26 compute-0 python3.9[281116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120445.1364417-182-52617114185128/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:27 compute-0 python3.9[281266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:28 compute-0 podman[281362]: 2025-11-26 01:27:28.453351038 +0000 UTC m=+0.118241711 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:27:28 compute-0 podman[281361]: 2025-11-26 01:27:28.466631083 +0000 UTC m=+0.133921083 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible)
Nov 26 01:27:28 compute-0 podman[281363]: 2025-11-26 01:27:28.512665543 +0000 UTC m=+0.171082393 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
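The three health_status entries above are podman's periodic healthchecks; each container's config_data names a healthcheck test script and the host mount it runs from, and health_failing_streak=0 means the last run passed. The same check can be triggered on demand; a sketch (container name taken from the log, invocation illustrative):

# Sketch: run the same healthcheck podman executes periodically for
# the containers logged above. Exit code 0 means healthy.
import subprocess

r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if r.returncode == 0 else "unhealthy")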
Nov 26 01:27:28 compute-0 python3.9[281427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120446.9483645-182-185478221589608/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:29 compute-0 podman[158021]: time="2025-11-26T01:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:27:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:27:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6831 "" "Go-http-client/1.1"
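podman[158021] is the podman API service answering libpod REST calls on the unix socket; these two GETs (container list, then stats) recur on every scrape and match the CONTAINER_HOST=unix:///run/podman/podman.sock setting in the podman_exporter config above. The same endpoint can be queried from Python with stdlib http.client bound to the socket; a sketch, assuming the default socket path:

# Sketch: list containers through podman's libpod REST API over the
# unix socket, mirroring the GETs logged above. Socket path assumed.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")      # host is unused over AF_UNIX
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])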
Nov 26 01:27:29 compute-0 python3.9[281602]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:27:31 compute-0 python3.9[281756]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:31 compute-0 openstack_network_exporter[160178]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:27:31 compute-0 openstack_network_exporter[160178]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:27:31 compute-0 openstack_network_exporter[160178]: ERROR   01:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:27:31 compute-0 openstack_network_exporter[160178]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:27:31 compute-0 openstack_network_exporter[160178]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
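These exporter errors are expected on a compute node: openstack_network_exporter probes the ovn-northd and ovsdb-server control sockets, but only ovn-controller and ovs-vswitchd run here, and there is no userspace (netdev) datapath for the dpif-netdev PMD queries. A defensive collector would test for the control socket before calling; a sketch, with the run directories as assumptions:

# Sketch: skip appctl-style calls when a daemon's control socket is
# absent, avoiding the "no control socket files found" errors above.
# The run directories are assumptions for illustration.
import glob

def control_socket(daemon, rundirs=("/var/run/ovn", "/var/run/openvswitch")):
    for d in rundirs:
        hits = glob.glob(f"{d}/{daemon}.*.ctl")   # e.g. ovs-vswitchd.<pid>.ctl
        if hits:
            return hits[0]
    return None

for daemon in ("ovn-northd", "ovsdb-server", "ovs-vswitchd"):
    sock = control_socket(daemon)
    print(daemon, "->", sock or "not running here; skipping call")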
Nov 26 01:27:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:32 compute-0 podman[281881]: 2025-11-26 01:27:32.032395344 +0000 UTC m=+0.112808917 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:27:32 compute-0 podman[281880]: 2025-11-26 01:27:32.046130612 +0000 UTC m=+0.121839592 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:27:32 compute-0 python3.9[281950]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:33 compute-0 python3.9[282029]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 26 01:27:34 compute-0 python3.9[282181]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:34 compute-0 python3.9[282259]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 26 01:27:35 compute-0 python3.9[282411]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:36 compute-0 podman[282489]: 2025-11-26 01:27:36.579135831 +0000 UTC m=+0.119106885 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:27:36 compute-0 podman[282482]: 2025-11-26 01:27:36.592499219 +0000 UTC m=+0.140245962 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vcs-type=git, distribution-scope=public, container_name=kepler, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible)
Nov 26 01:27:37 compute-0 python3.9[282600]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:27:37 compute-0 python3.9[282678]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:38 compute-0 python3.9[282830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:27:39 compute-0 python3.9[282908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:40 compute-0 python3.9[283060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:27:40 compute-0 systemd[1]: Reloading.
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:27:41
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log']
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
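This is one full balancer cycle: mode upmap with a 5% max-misplaced budget, eleven pools evaluated, and "prepared 0/10 changes" meaning the PG distribution already needs no upmap adjustments. The same verdict is available from the CLI; a minimal check (standard ceph commands, invocation illustrative):

# Minimal check of the balancer state the mgr logs above;
# shelling out to `ceph balancer status` like this is illustrative.
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)
print(status["active"], status["mode"])   # expect: True upmap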
Nov 26 01:27:41 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:27:41 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:27:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:27:42 compute-0 python3.9[283249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:43 compute-0 python3.9[283327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:27:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:44 compute-0 python3.9[283479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:45 compute-0 python3.9[283557]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 26 01:27:46 compute-0 python3.9[283709]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:27:46 compute-0 systemd[1]: Reloading.
Nov 26 01:27:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:27:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:27:46 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 01:27:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 01:27:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 01:27:46 compute-0 systemd[1]: Finished Create netns directory.
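netns-placeholder is a oneshot unit, so this sequence is normal completion rather than a failure: systemd reloads, runs "Create netns directory", releases the transient mount the unit created, and the service deactivates cleanly. A quick way to confirm that outcome after the fact; a sketch using systemctl's property output:

# Sketch: confirm a oneshot unit like netns-placeholder completed
# cleanly (illustrative use of `systemctl show` properties).
import subprocess

out = subprocess.run(
    ["systemctl", "show", "netns-placeholder.service",
     "-p", "Result", "-p", "ExecMainStatus"],
    capture_output=True, text=True, check=True).stdout
print(out)   # expect Result=success and ExecMainStatus=0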
Nov 26 01:27:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 26 01:27:48 compute-0 python3.9[283901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:49 compute-0 python3.9[284053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:50 compute-0 python3.9[284177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120468.5239513-333-56267458283559/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:27:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
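Each pg_autoscaler pair above follows one rule: pg target = capacity_ratio x bias x (target PGs per OSD x number of OSDs), quantized afterwards to a power of two. Assuming the default mon_target_pg_per_osd of 100 and the three OSDs from the inventory at the top of this section (a factor of 300), the logged numbers reproduce exactly; a check:

# Reproduce the pg_autoscaler arithmetic logged above, assuming the
# default mon_target_pg_per_osd=100 and the 3 OSDs from the inventory.
TARGET_PG_PER_OSD = 100
NUM_OSDS = 3

def pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557249951162337
print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635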
Nov 26 01:27:51 compute-0 python3.9[284329]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:27:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:52 compute-0 python3.9[284481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:27:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:53 compute-0 python3.9[284604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120471.7660894-358-162032637015660/.source.json _original_basename=.eae4dw3c follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
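The ovn_metadata_agent.json written here is a kolla config file: when the container starts with command kolla_start, it reads this JSON to learn which process to exec and which config files to copy into place with what ownership. A representative shape (the command and file entries below are illustrative, not the actual file contents):

# Representative shape of a kolla config.json like the one written
# above; command, paths and ownership here are illustrative only.
import json

kolla_config = {
    "command": "/usr/bin/neutron-ovn-metadata-agent "
               "--config-dir /etc/neutron.conf.d",
    "config_files": [
        {"source": "/var/lib/openstack/config/*",
         "dest": "/etc/neutron.conf.d/",
         "owner": "neutron", "perm": "0600"},
    ],
    "permissions": [
        {"path": "/var/log/neutron",
         "owner": "neutron:neutron", "recurse": True},
    ],
}
print(json.dumps(kolla_config, indent=2))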
Nov 26 01:27:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:54 compute-0 python3.9[284756]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:27:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:58 compute-0 python3.9[285183]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 26 01:27:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:27:59 compute-0 podman[285308]: 2025-11-26 01:27:59.392652176 +0000 UTC m=+0.129066406 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:27:59 compute-0 podman[285307]: 2025-11-26 01:27:59.424208918 +0000 UTC m=+0.163168369 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 26 01:27:59 compute-0 podman[285309]: 2025-11-26 01:27:59.453593808 +0000 UTC m=+0.176053158 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:27:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:27:59 compute-0 python3.9[285392]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:27:59 compute-0 podman[158021]: time="2025-11-26T01:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:27:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Nov 26 01:27:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6829 "" "Go-http-client/1.1"
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.779 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.780 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
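[Editor's note] The run of near-identical DEBUG lines above shows the ceilometer compute agent wiring every loaded pollster to one shared thread pool and a set of still-empty per-cycle caches before the first polling pass. Below is a minimal Python sketch of that pattern, assuming nothing beyond what the log states; PollingRunner and its fields are hypothetical stand-ins, not ceilometer's actual API.

    from concurrent.futures import ThreadPoolExecutor

    class PollingRunner:
        def __init__(self, max_workers=4):
            self.executor = ThreadPoolExecutor(max_workers=max_workers)
            self.cache = {}             # per-cycle sample cache, empty at start
            self.pollster_history = {}  # last results per pollster
            self.discovery_cache = {}   # resources found by discovery methods
            self.registered = []

        def register_pollster_execution(self, extension, source="pollsters"):
            # Mirrors the "Registering pollster [...] from source [pollsters]"
            # lines: the extension is queued against the shared executor and
            # the (still empty) caches.
            self.registered.append((source, extension))
            print(f"Registering pollster [{extension!r}] from source [{source}] "
                  f"with cache [{self.cache}]")

    runner = PollingRunner()
    runner.register_pollster_execution("cpu")  # stand-in for a stevedore Extension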
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
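[Editor's note] Each pollster above runs its discovery method (local_instances) first and is skipped when discovery returns nothing; compute-0 evidently hosts no instances this cycle, so every meter is skipped. A minimal sketch of that discovery-then-skip logic, under that assumption (run_pollster is a hypothetical helper, not ceilometer's code):

    def run_pollster(name, discover):
        # Discovery first ("Executing discovery process for pollsters [...]"),
        # then skip when the method yields no resources, exactly as logged.
        resources = discover()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [(name, resource) for resource in resources]

    # No instances are scheduled on this hypervisor, so discovery is empty:
    run_pollster("disk.device.usage", lambda: [])
    run_pollster("power.state", lambda: [])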
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.794 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.795 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:27:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:27:59.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
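[Editor's note] The "Finished processing pollster" lines arrive in a burst once every submitted task completes, whether or not it produced samples. A hypothetical illustration of that completion bookkeeping using done-callbacks on the shared executor; ceilometer's actual mechanism may differ:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return []  # no resources this cycle, so the task yields no samples

    with ThreadPoolExecutor() as executor:
        for name in ("disk.device.usage", "power.state", "cpu"):
            future = executor.submit(poll, name)
            # Bind name via a default argument so each callback logs its own meter.
            future.add_done_callback(
                lambda fut, n=name: print(f"Finished processing pollster [{n}]."))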
Nov 26 01:28:01 compute-0 python3.9[285552]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 01:28:01 compute-0 openstack_network_exporter[160178]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:28:01 compute-0 openstack_network_exporter[160178]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:28:01 compute-0 openstack_network_exporter[160178]: ERROR   01:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:28:01 compute-0 openstack_network_exporter[160178]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:28:01 compute-0 openstack_network_exporter[160178]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
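[Editor's note] The exporter errors above all reduce to one precondition: the OVS/OVN control sockets it wants to query do not exist on this compute host (ovn-northd runs on the controllers, and no datapath is configured yet). A sketch of the same check in Python; the glob patterns are assumptions based on common appctl socket locations, not paths confirmed by this log:

    import glob

    def find_ctl_socket(pattern):
        matches = glob.glob(pattern)
        if not matches:
            raise FileNotFoundError(f"no control socket files found for {pattern}")
        return matches[0]

    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",          # assumed location
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        try:
            print(find_ctl_socket(pattern))
        except FileNotFoundError as exc:
            print("ERROR:", exc)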
Nov 26 01:28:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:02 compute-0 podman[285629]: 2025-11-26 01:28:02.5730216 +0000 UTC m=+0.118379057 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:28:02 compute-0 podman[285626]: 2025-11-26 01:28:02.579306436 +0000 UTC m=+0.127129972 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 01:28:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:03 compute-0 python3[285772]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:28:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:12 compute-0 podman[285828]: 2025-11-26 01:28:12.496638891 +0000 UTC m=+5.218398226 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, release-0.7.12=, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 01:28:12 compute-0 podman[285829]: 2025-11-26 01:28:12.505613462 +0000 UTC m=+5.226311278 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 01:28:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:15 compute-0 podman[285786]: 2025-11-26 01:28:15.16258669 +0000 UTC m=+11.550684195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 01:28:15 compute-0 podman[285916]: 2025-11-26 01:28:15.465015465 +0000 UTC m=+0.101634498 container create e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 01:28:15 compute-0 podman[285916]: 2025-11-26 01:28:15.419062152 +0000 UTC m=+0.055681245 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 01:28:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:15 compute-0 python3[285772]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
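[Editor's note] The PODMAN-CONTAINER-DEBUG line above shows how the config_data dict from the container create event is flattened into a podman create command line. A simplified stand-in for that flattening, handling only a few of the keys seen in the log; this is not the edpm_container_manage module's actual source:

    def podman_create_argv(name, config):
        # Translate a subset of the logged config_data keys into podman flags.
        argv = ["podman", "create", "--name", name]
        for key, value in config.get("environment", {}).items():
            argv += ["--env", f"{key}={value}"]
        if config.get("net"):
            argv += ["--network", config["net"]]
        if config.get("privileged"):
            argv.append("--privileged=True")
        for volume in config.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(config["image"])
        return argv

    print(" ".join(podman_create_argv("ovn_metadata_agent", {
        "image": "quay.io/podified-antelope-centos9/"
                 "openstack-neutron-metadata-agent-ovn:current-podified",
        "net": "host",
        "privileged": True,
        "volumes": ["/run/openvswitch:/run/openvswitch:z"],
    })))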
Nov 26 01:28:16 compute-0 python3.9[286100]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:28:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:17 compute-0 python3.9[286254]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:18 compute-0 python3.9[286330]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:28:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:19 compute-0 python3.9[286482]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764120498.8234794-446-231993044546033/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:20 compute-0 python3.9[286558]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:28:20 compute-0 systemd[1]: Reloading.
Nov 26 01:28:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:28:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:28:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:21 compute-0 python3.9[286695]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:22 compute-0 systemd[1]: Reloading.
Nov 26 01:28:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:28:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
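[Editor's note] The two ansible-systemd invocations above (daemon_reload=True, then state=restarted enabled=True) amount to a daemon-reload followed by enable and restart of the generated unit. Sketched with subprocess below; check=True mirrors Ansible failing the task on a non-zero exit status:

    import subprocess

    def restart_unit(unit):
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    # restart_unit("edpm_ovn_metadata_agent.service")  # needs root on the host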
Nov 26 01:28:22 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 26 01:28:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6cc3785ecd2caa97f4699bfe41aac1270551cfb4ec44cb3b3195535b043ef4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6cc3785ecd2caa97f4699bfe41aac1270551cfb4ec44cb3b3195535b043ef4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce.
Nov 26 01:28:22 compute-0 podman[286811]: 2025-11-26 01:28:22.795379678 +0000 UTC m=+0.250015214 container init e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + sudo -E kolla_set_configs
Nov 26 01:28:22 compute-0 podman[286811]: 2025-11-26 01:28:22.855662161 +0000 UTC m=+0.310297657 container start e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 01:28:22 compute-0 edpm-start-podman-container[286811]: ovn_metadata_agent
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Validating config file
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Copying service configuration files
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Writing out command to execute
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
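
The copy and permission steps above are driven by /var/lib/kolla/config_files/config.json, mounted read-only per the container volumes. The file itself is not in the log; the snippet below is a hedged reconstruction in kolla's documented command/config_files/permissions layout, with owner and perm values guessed from the steps printed above:

    # Hypothetical reconstruction; field values are inferred, not logged.
    CONFIG_JSON = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }
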
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: ++ cat /run_command
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + CMD=neutron-ovn-metadata-agent
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + ARGS=
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + sudo kolla_copy_cacerts
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + [[ ! -n '' ]]
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + . kolla_extend_start
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: Running command: 'neutron-ovn-metadata-agent'
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + umask 0022
Nov 26 01:28:22 compute-0 ovn_metadata_agent[286828]: + exec neutron-ovn-metadata-agent
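
The shell trace above is the tail of kolla's start script: it reads the command that kolla_set_configs wrote out ("Writing out command to execute"), sets the umask, and execs it so the agent takes over the container's main process. A minimal Python rendering of the same flow:

    import os
    import shlex

    # /run_command holds the command written out by kolla_set_configs
    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())

    print(f"Running command: {cmd[0]!r}")
    os.umask(0o022)
    os.execvp(cmd[0], cmd)  # replaces this process with the agent
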
Nov 26 01:28:22 compute-0 edpm-start-podman-container[286809]: Creating additional drop-in dependency for "ovn_metadata_agent" (e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce)
Nov 26 01:28:22 compute-0 podman[286846]: 2025-11-26 01:28:22.990283381 +0000 UTC m=+0.113607214 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
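
Each podman event line embeds the container's labels, including the edpm config_data blob, as a Python dict repr. That makes the volume and healthcheck wiring recoverable straight from the journal; a small helper, assuming (as holds for the events above) that config_data owns the last closing brace on the line:

    import ast
    import re

    def config_data(event_line):
        """Extract the config_data label from a podman journal line."""
        m = re.search(r"config_data=(\{.*\})", event_line)  # greedy: last '}'
        return ast.literal_eval(m.group(1)) if m else None

    # e.g. config_data(line)["volumes"] lists the bind mounts shown above
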
Nov 26 01:28:23 compute-0 systemd[1]: Reloading.
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:23 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c3f2d55-4568-4491-9505-28597521b74c does not exist
Nov 26 01:28:23 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 25acbadb-e758-4c55-a0aa-f9d334b1cc42 does not exist
Nov 26 01:28:23 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7b3e9c06-d1e7-448a-8b7a-86c6a3ed34ec does not exist
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:28:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:28:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:23 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
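
The mon_command dispatches above come from the mgr (cephadm) polling the monitor. The same JSON-prefix commands can be issued from Python through the librados binding (python3-rados), sketched here assuming /etc/ceph/ceph.conf and a usable admin keyring on the node:

    import json
    import rados

    # connect on enter, shutdown on exit
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(ret, outbuf.decode())
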
Nov 26 01:28:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:28:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:28:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:23 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 26 01:28:24 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 26 01:28:24 compute-0 systemd[1]: session-53.scope: Consumed 1min 34.159s CPU time.
Nov 26 01:28:24 compute-0 systemd-logind[800]: Session 53 logged out. Waiting for processes to exit.
Nov 26 01:28:24 compute-0 systemd-logind[800]: Removed session 53.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.217427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504217492, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3472798, "memory_usage": 3522496, "flush_reason": "Manual Compaction"}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504244331, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3408043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9685, "largest_seqno": 11723, "table_properties": {"data_size": 3398778, "index_size": 5887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17829, "raw_average_key_size": 19, "raw_value_size": 3380419, "raw_average_value_size": 3690, "num_data_blocks": 267, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120271, "oldest_key_time": 1764120271, "file_creation_time": 1764120504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 26981 microseconds, and 12950 cpu microseconds.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.244404) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3408043 bytes OK
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.244426) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.247409) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.247431) EVENT_LOG_v1 {"time_micros": 1764120504247424, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.247453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3464298, prev total WAL file size 3464298, number of live WAL files 2.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.249534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3328KB)], [26(5924KB)]
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504249599, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9474918, "oldest_snapshot_seqno": -1}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3682 keys, 7766737 bytes, temperature: kUnknown
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504288668, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7766737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7738629, "index_size": 17822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 88449, "raw_average_key_size": 24, "raw_value_size": 7668617, "raw_average_value_size": 2082, "num_data_blocks": 771, "num_entries": 3682, "num_filter_entries": 3682, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.288985) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7766737 bytes
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.291518) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.0 rd, 198.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4196, records dropped: 514 output_compression: NoCompression
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.291540) EVENT_LOG_v1 {"time_micros": 1764120504291530, "job": 10, "event": "compaction_finished", "compaction_time_micros": 39146, "compaction_time_cpu_micros": 16599, "output_level": 6, "num_output_files": 1, "total_output_size": 7766737, "num_input_records": 4196, "num_output_records": 3682, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504292350, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120504293579, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.249417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.293766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.293773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.293776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.293779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:28:24 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:28:24.293782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
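
The rocksdb lines tagged EVENT_LOG_v1 carry a plain JSON payload after the marker, so the monitor's flush/compaction history can be pulled out of the journal text directly:

    import json

    def rocksdb_events(journal_lines):
        """Yield the JSON payload of each rocksdb EVENT_LOG_v1 line."""
        for line in journal_lines:
            _, marker, payload = line.partition("EVENT_LOG_v1 ")
            if marker:
                yield json.loads(payload)

    # e.g. total compaction time for the burst above:
    # sum(e.get("compaction_time_micros", 0) for e in rocksdb_events(lines))
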
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.520754707 +0000 UTC m=+0.079022589 container create 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.48434399 +0000 UTC m=+0.042611922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:24 compute-0 systemd[1]: Started libpod-conmon-361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc.scope.
Nov 26 01:28:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.657395133 +0000 UTC m=+0.215663015 container init 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.667509355 +0000 UTC m=+0.225777197 container start 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.672175406 +0000 UTC m=+0.230443268 container attach 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:28:24 compute-0 nervous_bohr[287125]: 167 167
Nov 26 01:28:24 compute-0 systemd[1]: libpod-361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc.scope: Deactivated successfully.
Nov 26 01:28:24 compute-0 conmon[287125]: conmon 361f042d50edb32782db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc.scope/container/memory.events
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.676068264 +0000 UTC m=+0.234336146 container died 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-72cf6fc3c78e550e7bf3daba2a5bd2a1c4a1f07e4b9fec44eed8014d8fa51f57-merged.mount: Deactivated successfully.
Nov 26 01:28:24 compute-0 podman[287109]: 2025-11-26 01:28:24.734984 +0000 UTC m=+0.293251852 container remove 361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bohr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:28:24 compute-0 systemd[1]: libpod-conmon-361f042d50edb32782db01c16ced8d532709511a45e9c51eadf7bd340c6a4ebc.scope: Deactivated successfully.
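
The nervous_bohr container above lives for roughly 50 ms, prints "167 167" (the ceph uid/gid), and is removed; cephadm uses throwaway containers like this to probe the image. The log does not show the container's argv, so the stat invocation below is an assumption, but the lifecycle matches the create/start/attach/died/remove events:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot probe: run, capture stdout, auto-remove (--rm).
    # The stat command is a guess at the uid/gid probe; argv is not logged.
    probe = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = probe.stdout.split()  # "167 167" in the run above
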
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.896 286844 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.896 286844 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.896 286844 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.897 286844 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.898 286844 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.899 286844 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.900 286844 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.901 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.902 286844 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.903 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.904 286844 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.905 286844 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.906 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.907 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.908 286844 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.909 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.910 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.911 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.912 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.913 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.914 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.915 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.916 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.917 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.918 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.919 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.920 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.921 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.922 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.923 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.924 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.925 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.926 286844 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.927 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.928 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.929 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.930 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.931 286844 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
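The block ending at the asterisk line above is oslo.config's standard startup dump: the service calls CONF.log_opt_values(), which logs one line per registered option and masks secret values (such as transport_url) as ****. A minimal sketch of how any oslo-based service produces such a dump, assuming an illustrative option set and project name:

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('use_ssl', default=False),
        # secret=True is what makes a value print as **** in the dump
        cfg.StrOpt('transport_url', secret=True),
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([], project='demo')                 # parse argv and config files
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits a banner like the one above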
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.941 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.941 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.941 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.941 286844 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.942 286844 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.956 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 27d03014-5e51-4d89-b5a1-b13242894075 (UUID: 27d03014-5e51-4d89-b5a1-b13242894075) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.988 286844 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.988 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.989 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:24 compute-0 podman[287149]: 2025-11-26 01:28:24.988924442 +0000 UTC m=+0.089429708 container create b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.989 286844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:24.995 286844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.003 286844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
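Both connections above (local ovsdb-server on tcp:127.0.0.1:6640, OVN southbound on ssl:ovsdbserver-sb.openstack.svc:6642) follow the same ovsdbapp pattern: build an OVS IDL for a server and schema, wrap it in a Connection, and hand it to a schema-specific API class, which registers the indices logged earlier. A rough sketch against the local switch only, reusing the endpoint and OVS.ovsdb_timeout from the dump above (this is illustrative, not the agent's exact wiring):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # tcp:127.0.0.1:6640 is ovs.ovsdb_connection in the config dump
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)  # OVS.ovsdb_timeout = 10
    ovs = impl_idl.OvsdbIdl(conn)

    # list the local bridges; on this host that should include br-int
    print(ovs.list_br().execute(check_error=True))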
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.015 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '27d03014-5e51-4d89-b5a1-b13242894075'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], external_ids={}, name=27d03014-5e51-4d89-b5a1-b13242894075, nb_cfg_timestamp=1764118898498, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.016 286844 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f78c00d5370>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.017 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.017 286844 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.017 286844 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.018 286844 INFO oslo_service.service [-] Starting 1 workers
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.023 286844 DEBUG oslo_service.service [-] Started child 287163 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
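"Starting 1 workers" followed by "Started child 287163" is oslo.service's ProcessLauncher forking its worker processes; each child then runs the after_init callbacks published on the next line. A bare-bones sketch of that pattern, where MyService is an illustrative placeholder rather than the agent's real class:

    from oslo_config import cfg
    from oslo_service import service

    class MyService(service.Service):
        def start(self):
            super().start()
            # worker body: serve requests until stop() is called

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(MyService(), workers=1)  # logs "Starting 1 workers"
    launcher.wait()                                  # parent supervises children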
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.027 287163 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-237087'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.028 286844 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpf3eka0_2/privsep.sock']
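The privsep helper launched above is how the agent runs privileged code: a PrivContext forks a root daemon (here via sudo plus rootwrap) that keeps only the Linux capabilities listed in the config dump (21 = CAP_SYS_ADMIN, 12 = CAP_NET_ADMIN, and so on), and decorated functions execute inside that daemon rather than in the agent process. An illustrative sketch of the pattern, not Neutron's actual definitions:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # reads its settings from the [privsep_namespace] section dumped above
    namespace_cmd = priv_context.PrivContext(
        'demo', cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN])

    @namespace_cmd.entrypoint
    def create_netns(name):
        # runs inside the privileged helper over its unix-socket channel
        ...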
Nov 26 01:28:25 compute-0 podman[287149]: 2025-11-26 01:28:24.961151337 +0000 UTC m=+0.061656573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.059 287163 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.060 287163 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.060 287163 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.064 287163 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.074 287163 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 01:28:25 compute-0 systemd[1]: Started libpod-conmon-b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb.scope.
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.083 287163 INFO eventlet.wsgi.server [-] (287163) wsgi starting up on http:/var/lib/neutron/metadata_proxy
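
The odd-looking "http:/var/lib/neutron/metadata_proxy" is eventlet printing a unix-socket address rather than a truncated URL; the proxy listens on the metadata_proxy_socket path shown in the config dump below. A minimal equivalent with a stand-in WSGI app:

    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):  # stand-in for MetadataProxyHandler
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]

    sock = eventlet.listen("/var/lib/neutron/metadata_proxy",
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:/..." as above
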
Nov 26 01:28:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:25 compute-0 podman[287149]: 2025-11-26 01:28:25.190626626 +0000 UTC m=+0.291131892 container init b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:25 compute-0 podman[287149]: 2025-11-26 01:28:25.21621019 +0000 UTC m=+0.316715416 container start b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:25 compute-0 podman[287149]: 2025-11-26 01:28:25.221272502 +0000 UTC m=+0.321777778 container attach b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 01:28:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.745 286844 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.745 286844 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpf3eka0_2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.624 287175 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.632 287175 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.635 287175 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.636 287175 INFO oslo.privsep.daemon [-] privsep daemon running as pid 287175
Nov 26 01:28:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:25.749 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[68f93222-716f-4f1a-94a5-4a0b1a110b9b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.268 287175 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.268 287175 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.269 287175 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
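
The "context-manager" lock inside the privsep daemon (pid 287175) serializes the first-time construction of neutron_lib's oslo.db engine facade. Stripped of caching, that helper amounts to something like the following hedged approximation:

    from oslo_db.sqlalchemy import enginefacade

    # Rough equivalent of neutron_lib.db.api._create_context_manager.
    context_manager = enginefacade.transaction_context()
    context_manager.configure(sqlite_fk=True)
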
Nov 26 01:28:26 compute-0 practical_visvesvaraya[287169]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:28:26 compute-0 practical_visvesvaraya[287169]: --> relative data size: 1.0
Nov 26 01:28:26 compute-0 practical_visvesvaraya[287169]: --> All data devices are unavailable
Nov 26 01:28:26 compute-0 systemd[1]: libpod-b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb.scope: Deactivated successfully.
Nov 26 01:28:26 compute-0 systemd[1]: libpod-b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb.scope: Consumed 1.235s CPU time.
Nov 26 01:28:26 compute-0 podman[287149]: 2025-11-26 01:28:26.528233603 +0000 UTC m=+1.628738869 container died b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dbc1d37a4f1f73dd2c19f263912037141aaefff9e4f7b818ae066b2fb5f4294-merged.mount: Deactivated successfully.
Nov 26 01:28:26 compute-0 podman[287149]: 2025-11-26 01:28:26.639724037 +0000 UTC m=+1.740229263 container remove b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:28:26 compute-0 systemd[1]: libpod-conmon-b400b0238c8c45bb51bc099692f981b575163c07e0bea9a6e2403b2fbe1944eb.scope: Deactivated successfully.
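
The short-lived practical_visvesvaraya container (pulled, started, dead, and removed within about two seconds) has the shape of a cephadm disk probe: the "-->" lines are ceph-volume batch output, and "All data devices are unavailable" just means the three LVM-backed devices are already consumed. An illustrative re-run of such a probe; the device path and flags are assumptions, only the image digest comes from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm matches the observed create/attach/remove lifecycle.
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", IMAGE,
         "ceph-volume", "lvm", "batch", "--report", "/dev/vdb"],
        check=True,
    )
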
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.821 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[ceaf8867-1ccb-49b1-b497-144ffac26176]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.824 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, column=external_ids, values=({'neutron:ovn-metadata-id': '9194697b-e45c-5a8d-bcaf-8aad234faa1c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.841 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
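
The two one-command transactions above register the agent in Chassis_Private.external_ids. The same pair through ovsdbapp's command API, batched into a single transaction for brevity (UUIDs copied from the log, connection setup as in the earlier sketch):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "ssl:ovsdbserver-sb.openstack.svc:6642", "OVN_Southbound")
    sb_api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=180))

    chassis = "27d03014-5e51-4d89-b5a1-b13242894075"
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add(
            "Chassis_Private", chassis, "external_ids",
            {"neutron:ovn-metadata-id":
             "9194697b-e45c-5a8d-bcaf-8aad234faa1c"}))
        txn.add(sb_api.db_set(
            "Chassis_Private", chassis,
            ("external_ids", {"neutron:ovn-bridge": "br-int"})))
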
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.849 286844 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.849 286844 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.849 286844 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.850 286844 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.850 286844 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.850 286844 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.850 286844 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.851 286844 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.851 286844 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.851 286844 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.851 286844 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.852 286844 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.852 286844 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.852 286844 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.852 286844 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.853 286844 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.853 286844 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.853 286844 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.854 286844 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.854 286844 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.854 286844 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.854 286844 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.855 286844 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.855 286844 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.855 286844 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.856 286844 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.856 286844 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.856 286844 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.856 286844 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.857 286844 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.857 286844 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.857 286844 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.857 286844 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.858 286844 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.858 286844 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.858 286844 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.858 286844 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.859 286844 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.859 286844 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.859 286844 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.860 286844 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.860 286844 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.860 286844 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.861 286844 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.861 286844 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.861 286844 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.861 286844 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.862 286844 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.862 286844 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.862 286844 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.862 286844 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.863 286844 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.863 286844 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.863 286844 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.863 286844 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.863 286844 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.864 286844 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.864 286844 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.864 286844 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.864 286844 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.865 286844 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.865 286844 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.865 286844 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.865 286844 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.866 286844 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.866 286844 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.866 286844 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.867 286844 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.867 286844 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.867 286844 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.868 286844 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.868 286844 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.868 286844 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.869 286844 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.869 286844 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.869 286844 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.870 286844 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.870 286844 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.870 286844 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.870 286844 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.871 286844 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.871 286844 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.871 286844 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.871 286844 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.872 286844 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.872 286844 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.872 286844 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.873 286844 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.873 286844 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.873 286844 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.873 286844 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.874 286844 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.874 286844 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.874 286844 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.874 286844 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.875 286844 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.875 286844 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.875 286844 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.875 286844 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.876 286844 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.876 286844 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.876 286844 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.876 286844 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.877 286844 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.877 286844 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.878 286844 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.878 286844 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.878 286844 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.879 286844 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.879 286844 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.879 286844 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.880 286844 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.880 286844 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.880 286844 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.881 286844 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.881 286844 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.881 286844 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.882 286844 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.882 286844 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.883 286844 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.883 286844 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.883 286844 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.883 286844 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.884 286844 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.885 286844 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.886 286844 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.886 286844 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.886 286844 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.886 286844 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.886 286844 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.887 286844 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.887 286844 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.887 286844 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.887 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.887 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.888 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.888 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.888 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.888 286844 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.889 286844 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.890 286844 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.891 286844 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.891 286844 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.891 286844 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.891 286844 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.891 286844 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.892 286844 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.892 286844 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.892 286844 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.892 286844 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.892 286844 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.893 286844 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.894 286844 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.894 286844 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.894 286844 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.894 286844 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.894 286844 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.895 286844 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.896 286844 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.897 286844 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.898 286844 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.898 286844 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.898 286844 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.898 286844 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.898 286844 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.899 286844 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.900 286844 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.901 286844 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.901 286844 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.901 286844 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.901 286844 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.901 286844 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.902 286844 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.903 286844 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.904 286844 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.904 286844 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.905 286844 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.906 286844 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.907 286844 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.908 286844 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.908 286844 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.908 286844 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.908 286844 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.908 286844 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.909 286844 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.910 286844 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.910 286844 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.910 286844 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.910 286844 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.910 286844 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.911 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.912 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.912 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.912 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.912 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.912 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.913 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.913 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.913 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.913 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.913 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.914 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.914 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.914 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.914 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.914 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.915 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.915 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.915 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.915 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.915 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.916 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.916 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.916 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.917 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.917 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.918 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.918 286844 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.918 286844 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.918 286844 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.918 286844 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.919 286844 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:28:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:28:26.919 286844 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
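[editor's note] The wall of DEBUG lines above is oslo.config's standard startup dump: once neutron-ovn-metadata-agent has parsed its configuration, it calls `log_opt_values()`, which prints every registered option group by group, masks secret options (hence `transport_url = ****`), and brackets the dump with rows of asterisks. A minimal sketch of that mechanism, using a couple of illustrative options rather than neutron's real registration:

```python
# Sketch of oslo.config's option dump, as seen in the agent log above.
# The two options here are stand-ins; neutron registers many more.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.StrOpt('ovn_sb_connection', default='tcp:127.0.0.1:6642',
                   help='OVN southbound DB connection string'),
        cfg.IntOpt('ovsdb_connection_timeout', default=180),
    ],
    group='ovn',
)

CONF([])  # parse an (empty) command line / config files

# Emits one "ovn.<opt> = <value> log_opt_values ..." DEBUG line per option,
# bracketed by 80-asterisk rows, matching the shape of the agent output.
CONF.log_opt_values(LOG, logging.DEBUG)
```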
Nov 26 01:28:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.752810525 +0000 UTC m=+0.088502302 container create 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.719438433 +0000 UTC m=+0.055130250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:27 compute-0 systemd[1]: Started libpod-conmon-841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39.scope.
Nov 26 01:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.910562231 +0000 UTC m=+0.246254048 container init 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.92199065 +0000 UTC m=+0.257682397 container start 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.926545158 +0000 UTC m=+0.262236955 container attach 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:28:27 compute-0 brave_fermat[287371]: 167 167
Nov 26 01:28:27 compute-0 systemd[1]: libpod-841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39.scope: Deactivated successfully.
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.93127603 +0000 UTC m=+0.266967767 container died 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 26 01:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e1bc640207d2f20ff4ce4af2b3d984e2df0385462202b97e753457694cec29-merged.mount: Deactivated successfully.
Nov 26 01:28:27 compute-0 podman[287355]: 2025-11-26 01:28:27.99321416 +0000 UTC m=+0.328905907 container remove 841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:28:28 compute-0 systemd[1]: libpod-conmon-841ffcfe2ddb6942d04f567aff35776b28579410d1e583dbb8bc3ebbcd051d39.scope: Deactivated successfully.
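[editor's note] The podman event sequence above (container create, init, start, attach, died, remove, all within a second) is the journal trace of a one-shot container: cephadm periodically launches short-lived ceph containers to probe the host. The only output of `brave_fermat`, `167 167`, looks like a uid/gid pair (167 is the ceph user/group in the ceph images), though the exact command isn't visible in the log. A sketch that reproduces the same lifecycle pattern; the image tag and probe command are stand-ins, not the actual cephadm invocation:

```python
# A one-shot "podman run --rm" produces exactly the journal sequence above:
# create -> init -> start -> attach -> died -> remove.
import subprocess

result = subprocess.run(
    [
        "podman", "run", "--rm",        # remove the container when it exits
        "--entrypoint", "id",           # override entrypoint; run a tiny probe
        "quay.io/ceph/ceph:v18",        # stand-in: any pullable image works
        "-u", "ceph",                   # prints the ceph uid (167 in this image)
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```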
Nov 26 01:28:28 compute-0 podman[287395]: 2025-11-26 01:28:28.245225578 +0000 UTC m=+0.083415060 container create a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:28:28 compute-0 podman[287395]: 2025-11-26 01:28:28.204981894 +0000 UTC m=+0.043171436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:28 compute-0 systemd[1]: Started libpod-conmon-a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5.scope.
Nov 26 01:28:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6492afe278259ce2f9b5ae37cd39b01f7168f618305f8ee2532cc3c433541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6492afe278259ce2f9b5ae37cd39b01f7168f618305f8ee2532cc3c433541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6492afe278259ce2f9b5ae37cd39b01f7168f618305f8ee2532cc3c433541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6492afe278259ce2f9b5ae37cd39b01f7168f618305f8ee2532cc3c433541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:28 compute-0 podman[287395]: 2025-11-26 01:28:28.40570455 +0000 UTC m=+0.243894102 container init a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:28:28 compute-0 podman[287395]: 2025-11-26 01:28:28.436728557 +0000 UTC m=+0.274918019 container start a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:28:28 compute-0 podman[287395]: 2025-11-26 01:28:28.443244849 +0000 UTC m=+0.281434361 container attach a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:29 compute-0 podman[158021]: time="2025-11-26T01:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:28:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 37301 "" "Go-http-client/1.1"
Nov 26 01:28:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7695 "" "Go-http-client/1.1"
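[editor's note] The `@ - - ... "GET /v4.9.3/libpod/..."` lines are the access log of the podman system service (PID 158021), here being polled over its unix socket by prometheus-podman-exporter. The same endpoint can be queried by hand. A sketch speaking raw HTTP/1.0 over the socket; the path is the usual rootful default and reading it requires root, so treat both as assumptions about this host:

```python
# Query the libpod REST API the same way the exporter does, minus the client
# library: a plain HTTP/1.0 GET over the podman service's unix socket.
import json
import socket

SOCK = "/run/podman/podman.sock"  # rootful default; may differ per host

request = (
    "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
    "Host: d\r\n"
    "\r\n"
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(request.encode())
    raw = b""
    while chunk := s.recv(65536):  # server closes the HTTP/1.0 connection
        raw += chunk

body = raw.split(b"\r\n\r\n", 1)[1]     # drop the response headers
for ctr in json.loads(body):
    print(ctr["Id"][:12], ctr.get("Names"))
```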
Nov 26 01:28:29 compute-0 agitated_black[287411]: {
Nov 26 01:28:29 compute-0 agitated_black[287411]:    "0": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:        {
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "devices": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "/dev/loop3"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            ],
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_name": "ceph_lv0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_size": "21470642176",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "name": "ceph_lv0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "tags": {
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_name": "ceph",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.crush_device_class": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.encrypted": "0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_id": "0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.vdo": "0"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            },
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "vg_name": "ceph_vg0"
Nov 26 01:28:29 compute-0 agitated_black[287411]:        }
Nov 26 01:28:29 compute-0 agitated_black[287411]:    ],
Nov 26 01:28:29 compute-0 agitated_black[287411]:    "1": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:        {
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "devices": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "/dev/loop4"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            ],
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_name": "ceph_lv1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_size": "21470642176",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "name": "ceph_lv1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "tags": {
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_name": "ceph",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.crush_device_class": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.encrypted": "0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_id": "1",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.vdo": "0"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            },
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "vg_name": "ceph_vg1"
Nov 26 01:28:29 compute-0 agitated_black[287411]:        }
Nov 26 01:28:29 compute-0 agitated_black[287411]:    ],
Nov 26 01:28:29 compute-0 agitated_black[287411]:    "2": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:        {
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "devices": [
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "/dev/loop5"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            ],
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_name": "ceph_lv2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_size": "21470642176",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "name": "ceph_lv2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "tags": {
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.cluster_name": "ceph",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.crush_device_class": "",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.encrypted": "0",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osd_id": "2",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:                "ceph.vdo": "0"
Nov 26 01:28:29 compute-0 agitated_black[287411]:            },
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "type": "block",
Nov 26 01:28:29 compute-0 agitated_black[287411]:            "vg_name": "ceph_vg2"
Nov 26 01:28:29 compute-0 agitated_black[287411]:        }
Nov 26 01:28:29 compute-0 agitated_black[287411]:    ]
Nov 26 01:28:29 compute-0 agitated_black[287411]: }
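[editor's note] The JSON printed by `agitated_black` maps OSD id to a list of logical-volume records, which is the shape of `ceph-volume lvm list --format json` output (the command line itself isn't captured in the journal). A sketch that extracts the fields usually needed to map OSDs back to devices; the input filename is a hypothetical saved copy of the JSON above:

```python
# Parse ceph-volume-style "OSD id -> [LV records]" JSON and print, per OSD,
# the LV path, backing devices, OSD fsid, and size.
import json

with open("ceph_volume_lvm_list.json") as f:  # saved copy of the JSON above
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(
            f"osd.{osd_id}: {lv['lv_path']} "
            f"on {','.join(lv['devices'])} "
            f"(fsid {tags['ceph.osd_fsid']}, "
            f"{int(lv['lv_size']) / 2**30:.1f} GiB)"
        )
```

Against the log above this yields three ~20.0 GiB OSDs (osd.0 through osd.2) on /dev/loop3 through /dev/loop5.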
Nov 26 01:28:29 compute-0 systemd[1]: libpod-a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5.scope: Deactivated successfully.
Nov 26 01:28:29 compute-0 podman[287395]: 2025-11-26 01:28:29.936231776 +0000 UTC m=+1.774421248 container died a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a6492afe278259ce2f9b5ae37cd39b01f7168f618305f8ee2532cc3c433541-merged.mount: Deactivated successfully.
Nov 26 01:28:30 compute-0 podman[287422]: 2025-11-26 01:28:29.99939517 +0000 UTC m=+0.103885182 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:28:30 compute-0 podman[287395]: 2025-11-26 01:28:30.014277036 +0000 UTC m=+1.852466508 container remove a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_black, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:28:30 compute-0 podman[287423]: 2025-11-26 01:28:30.015476189 +0000 UTC m=+0.105466656 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:28:30 compute-0 systemd[1]: libpod-conmon-a98fe8ca8e5f56c9e3021a6d711de00856097cc937ccdddb53645075d667a4c5.scope: Deactivated successfully.
Nov 26 01:28:30 compute-0 podman[287424]: 2025-11-26 01:28:30.051153866 +0000 UTC m=+0.137898862 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
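[editor's note] The three `container health_status` events above are podman's healthcheck machinery at work: a per-container systemd timer periodically runs the configured `healthcheck.test` command (visible in each event's `config_data`) and records the outcome (`health_status=healthy`, `health_failing_streak=0`). The same check can be triggered by hand; a sketch using one of the container names from the log:

```python
# Run a container's configured healthcheck once; exit code 0 means healthy.
import subprocess

rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
```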
Nov 26 01:28:30 compute-0 systemd-logind[800]: New session 54 of user zuul.
Nov 26 01:28:30 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.183773139 +0000 UTC m=+0.087551896 container create f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.15086805 +0000 UTC m=+0.054646807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:31 compute-0 systemd[1]: Started libpod-conmon-f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48.scope.
Nov 26 01:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.33238669 +0000 UTC m=+0.236165477 container init f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.348764867 +0000 UTC m=+0.252543654 container start f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.355345371 +0000 UTC m=+0.259124128 container attach f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:28:31 compute-0 modest_keldysh[287769]: 167 167
Nov 26 01:28:31 compute-0 systemd[1]: libpod-f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48.scope: Deactivated successfully.
Nov 26 01:28:31 compute-0 conmon[287769]: conmon f2866422a6ad1428a872 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48.scope/container/memory.events
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.363757806 +0000 UTC m=+0.267536583 container died f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7aa2ce0d73f8c5421682185a990d2ff8b6f53476fd457bbc8d6bbd76fcd99a8-merged.mount: Deactivated successfully.
Nov 26 01:28:31 compute-0 openstack_network_exporter[160178]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:28:31 compute-0 openstack_network_exporter[160178]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:28:31 compute-0 openstack_network_exporter[160178]: ERROR   01:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:28:31 compute-0 openstack_network_exporter[160178]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:28:31 compute-0 openstack_network_exporter[160178]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:28:31 compute-0 podman[287718]: 2025-11-26 01:28:31.437584838 +0000 UTC m=+0.341363605 container remove f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:28:31 compute-0 systemd[1]: libpod-conmon-f2866422a6ad1428a87218e5a0c6c32f10ef9d349c8d3c7df928bbfee31ebe48.scope: Deactivated successfully.
Nov 26 01:28:31 compute-0 podman[287830]: 2025-11-26 01:28:31.708902576 +0000 UTC m=+0.097000000 container create 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:28:31 compute-0 podman[287830]: 2025-11-26 01:28:31.67396178 +0000 UTC m=+0.062059214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:28:31 compute-0 python3.9[287824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:28:31 compute-0 systemd[1]: Started libpod-conmon-5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb.scope.
Nov 26 01:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf910aaa2c34d4bbabbd33b6f808a136f6173953ffcb100b1011d830e6a29dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf910aaa2c34d4bbabbd33b6f808a136f6173953ffcb100b1011d830e6a29dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf910aaa2c34d4bbabbd33b6f808a136f6173953ffcb100b1011d830e6a29dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf910aaa2c34d4bbabbd33b6f808a136f6173953ffcb100b1011d830e6a29dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:28:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:31 compute-0 podman[287830]: 2025-11-26 01:28:31.896456054 +0000 UTC m=+0.284553478 container init 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:28:31 compute-0 podman[287830]: 2025-11-26 01:28:31.91278555 +0000 UTC m=+0.300882944 container start 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:28:31 compute-0 podman[287830]: 2025-11-26 01:28:31.929019854 +0000 UTC m=+0.317117308 container attach 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:28:33 compute-0 adoring_curran[287848]: {
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_id": 0,
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "type": "bluestore"
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    },
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_id": 2,
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "type": "bluestore"
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    },
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_id": 1,
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:28:33 compute-0 adoring_curran[287848]:        "type": "bluestore"
Nov 26 01:28:33 compute-0 adoring_curran[287848]:    }
Nov 26 01:28:33 compute-0 adoring_curran[287848]: }
Nov 26 01:28:33 compute-0 systemd[1]: libpod-5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb.scope: Deactivated successfully.
Nov 26 01:28:33 compute-0 podman[287830]: 2025-11-26 01:28:33.086285786 +0000 UTC m=+1.474383220 container died 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:28:33 compute-0 systemd[1]: libpod-5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb.scope: Consumed 1.174s CPU time.
Nov 26 01:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf910aaa2c34d4bbabbd33b6f808a136f6173953ffcb100b1011d830e6a29dd6-merged.mount: Deactivated successfully.
Nov 26 01:28:33 compute-0 podman[287830]: 2025-11-26 01:28:33.189138318 +0000 UTC m=+1.577235702 container remove 5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:28:33 compute-0 systemd[1]: libpod-conmon-5cc75fe80c597b722d81daa9cd3c610de7ed2ad518e3cd2616e6adc917519ddb.scope: Deactivated successfully.
Nov 26 01:28:33 compute-0 podman[288007]: 2025-11-26 01:28:33.23503135 +0000 UTC m=+0.108072579 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9)
Nov 26 01:28:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:28:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:28:33 compute-0 podman[288012]: 2025-11-26 01:28:33.252797566 +0000 UTC m=+0.121916766 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:28:33 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:33 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 20fca5d9-2fab-4c13-a093-cd30be98831d does not exist
Nov 26 01:28:33 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d44dae3f-0d80-4154-86fb-5a8e641212de does not exist
Nov 26 01:28:33 compute-0 python3.9[288086]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:28:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:34 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:34 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:28:35 compute-0 python3.9[288301]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:28:35 compute-0 systemd[1]: Reloading.
Nov 26 01:28:35 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:28:35 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:28:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:37 compute-0 python3.9[288487]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:28:37 compute-0 network[288504]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:28:37 compute-0 network[288505]: 'network-scripts' will be removed from distribution in near future.
Nov 26 01:28:37 compute-0 network[288506]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:28:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:28:41
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:28:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:43 compute-0 python3.9[288777]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:44 compute-0 podman[288878]: 2025-11-26 01:28:44.565387189 +0000 UTC m=+0.120353242 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Nov 26 01:28:44 compute-0 podman[288879]: 2025-11-26 01:28:44.583124445 +0000 UTC m=+0.125917038 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 01:28:44 compute-0 python3.9[288968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:46 compute-0 python3.9[289121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:47 compute-0 python3.9[289274]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:48 compute-0 python3.9[289427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:49 compute-0 python3.9[289581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:28:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:28:51 compute-0 python3.9[289734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:28:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:52 compute-0 python3.9[289887]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:53 compute-0 podman[290010]: 2025-11-26 01:28:53.575469935 +0000 UTC m=+0.134226940 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 01:28:53 compute-0 python3.9[290057]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:55 compute-0 python3.9[290209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:56 compute-0 python3.9[290361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:57 compute-0 python3.9[290513]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:28:58 compute-0 python3.9[290665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:28:59 compute-0 python3.9[290817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:28:59 compute-0 podman[158021]: time="2025-11-26T01:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:28:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:28:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7272 "" "Go-http-client/1.1"
Nov 26 01:28:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:00 compute-0 python3.9[290969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:00 compute-0 podman[291002]: 2025-11-26 01:29:00.558724332 +0000 UTC m=+0.108115261 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:29:00 compute-0 podman[291012]: 2025-11-26 01:29:00.568981138 +0000 UTC m=+0.102874584 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:29:00 compute-0 podman[291014]: 2025-11-26 01:29:00.63133409 +0000 UTC m=+0.163157738 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 26 01:29:01 compute-0 python3.9[291185]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:01 compute-0 openstack_network_exporter[160178]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:29:01 compute-0 openstack_network_exporter[160178]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:29:01 compute-0 openstack_network_exporter[160178]: ERROR   01:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:29:01 compute-0 openstack_network_exporter[160178]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:29:01 compute-0 openstack_network_exporter[160178]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:29:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:02 compute-0 python3.9[291337]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:03 compute-0 python3.9[291489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:03 compute-0 podman[291515]: 2025-11-26 01:29:03.583681786 +0000 UTC m=+0.128425047 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:29:03 compute-0 podman[291514]: 2025-11-26 01:29:03.603327295 +0000 UTC m=+0.152531891 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Nov 26 01:29:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:04 compute-0 python3.9[291684]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:05 compute-0 python3.9[291836]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:06 compute-0 python3.9[291988]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:29:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:08 compute-0 python3.9[292140]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi
 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:10 compute-0 python3.9[292292]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:11 compute-0 python3.9[292444]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:29:11 compute-0 systemd[1]: Reloading.
Nov 26 01:29:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:12 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:29:12 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:29:13 compute-0 python3.9[292631]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:14 compute-0 python3.9[292784]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:14 compute-0 podman[292811]: 2025-11-26 01:29:14.805422832 +0000 UTC m=+0.093832802 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:29:14 compute-0 podman[292809]: 2025-11-26 01:29:14.829814833 +0000 UTC m=+0.116736131 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, distribution-scope=public, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 01:29:15 compute-0 python3.9[292973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:16 compute-0 python3.9[293126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:17 compute-0 python3.9[293279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:18 compute-0 python3.9[293432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:19 compute-0 python3.9[293586]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:29:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:22 compute-0 python3.9[293739]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 26 01:29:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:23 compute-0 podman[293864]: 2025-11-26 01:29:23.954251234 +0000 UTC m=+0.105998402 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:29:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:24 compute-0 python3.9[293911]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:29:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:29:24.934 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:29:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:29:24.934 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:29:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:29:24.935 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:29:25 compute-0 python3.9[293995]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:29:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:28 compute-0 python3.9[294148]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:29:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:29 compute-0 podman[158021]: time="2025-11-26T01:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:29:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:29:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7268 "" "Go-http-client/1.1"
Nov 26 01:29:29 compute-0 python3.9[294303]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:29:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:30 compute-0 podman[294430]: 2025-11-26 01:29:30.836013566 +0000 UTC m=+0.096436455 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 26 01:29:30 compute-0 podman[294431]: 2025-11-26 01:29:30.855739337 +0000 UTC m=+0.105695543 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:29:30 compute-0 podman[294432]: 2025-11-26 01:29:30.89881355 +0000 UTC m=+0.144502117 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:29:31 compute-0 python3.9[294513]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:29:31 compute-0 openstack_network_exporter[160178]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:29:31 compute-0 openstack_network_exporter[160178]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:29:31 compute-0 openstack_network_exporter[160178]: ERROR   01:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:29:31 compute-0 openstack_network_exporter[160178]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:29:31 compute-0 openstack_network_exporter[160178]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:29:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:33 compute-0 python3.9[294675]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:29:33 compute-0 podman[294828]: 2025-11-26 01:29:33.833764672 +0000 UTC m=+0.103543223 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:29:33 compute-0 podman[294827]: 2025-11-26 01:29:33.852910976 +0000 UTC m=+0.124948861 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 01:29:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:34 compute-0 python3.9[294970]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:34 compute-0 podman[295039]: 2025-11-26 01:29:34.775007119 +0000 UTC m=+0.121063982 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:29:34 compute-0 podman[295039]: 2025-11-26 01:29:34.862096982 +0000 UTC m=+0.208153795 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:29:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:29:36 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:29:36 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:36 compute-0 python3.9[295332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:37 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:37 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a67138c8-d815-4634-a758-c950ca63702a does not exist
Nov 26 01:29:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9e271138-d6d6-483f-8dac-fb1b7afccf84 does not exist
Nov 26 01:29:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f761d221-d2bc-4da4-85b1-247cf55a9d85 does not exist
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:29:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:29:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:29:37 compute-0 python3.9[295630]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:29:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.273001696 +0000 UTC m=+0.096011343 container create 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.231378473 +0000 UTC m=+0.054388180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:38 compute-0 systemd[1]: Started libpod-conmon-420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f.scope.
Nov 26 01:29:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.414630321 +0000 UTC m=+0.237640038 container init 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.431939535 +0000 UTC m=+0.254949192 container start 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.438310543 +0000 UTC m=+0.261320200 container attach 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:29:38 compute-0 zen_perlman[295910]: 167 167
Nov 26 01:29:38 compute-0 systemd[1]: libpod-420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f.scope: Deactivated successfully.
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.444326311 +0000 UTC m=+0.267335968 container died 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf9604aa977efc65beddb04c4d57e815065ebd2d77e6587bd43acb611c2773e0-merged.mount: Deactivated successfully.
Nov 26 01:29:38 compute-0 podman[295870]: 2025-11-26 01:29:38.529682515 +0000 UTC m=+0.352692172 container remove 420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:29:38 compute-0 systemd[1]: libpod-conmon-420348b38b1258a22ed434e3e18b7b5ea294e034369bf64ea0eced597fe7c71f.scope: Deactivated successfully.
Nov 26 01:29:38 compute-0 podman[295961]: 2025-11-26 01:29:38.788517094 +0000 UTC m=+0.072112485 container create 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:29:38 compute-0 python3.9[295953]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:38 compute-0 podman[295961]: 2025-11-26 01:29:38.765410348 +0000 UTC m=+0.049005769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:38 compute-0 systemd[1]: Started libpod-conmon-5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36.scope.
Nov 26 01:29:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:38 compute-0 podman[295961]: 2025-11-26 01:29:38.960413925 +0000 UTC m=+0.244009346 container init 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:29:38 compute-0 podman[295961]: 2025-11-26 01:29:38.976805073 +0000 UTC m=+0.260400494 container start 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:29:38 compute-0 podman[295961]: 2025-11-26 01:29:38.983198721 +0000 UTC m=+0.266794182 container attach 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:29:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:40 compute-0 python3.9[296144]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:40 compute-0 nervous_grothendieck[295979]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:29:40 compute-0 nervous_grothendieck[295979]: --> relative data size: 1.0
Nov 26 01:29:40 compute-0 nervous_grothendieck[295979]: --> All data devices are unavailable
Nov 26 01:29:40 compute-0 systemd[1]: libpod-5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36.scope: Deactivated successfully.
Nov 26 01:29:40 compute-0 podman[295961]: 2025-11-26 01:29:40.261331329 +0000 UTC m=+1.544926740 container died 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:29:40 compute-0 systemd[1]: libpod-5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36.scope: Consumed 1.220s CPU time.
Nov 26 01:29:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cc7e68347a70d7b7f13d598418bb945e4de5935157b5c1e2e87555f8081d43a-merged.mount: Deactivated successfully.
Nov 26 01:29:40 compute-0 podman[295961]: 2025-11-26 01:29:40.35842545 +0000 UTC m=+1.642020871 container remove 5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_grothendieck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:29:40 compute-0 systemd[1]: libpod-conmon-5e5a17afa8179ac5dbe4ffff3f0d077ba0f5c7c7e5eca142ab341e0161fe5e36.scope: Deactivated successfully.
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:29:41
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.506509355 +0000 UTC m=+0.082880846 container create b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:29:41 compute-0 python3.9[296450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.470106528 +0000 UTC m=+0.046478099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:41 compute-0 systemd[1]: Started libpod-conmon-b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77.scope.
Nov 26 01:29:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.684246889 +0000 UTC m=+0.260618440 container init b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.701896872 +0000 UTC m=+0.278268383 container start b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:29:41 compute-0 kind_lichterman[296485]: 167 167
Nov 26 01:29:41 compute-0 systemd[1]: libpod-b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77.scope: Deactivated successfully.
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.709017561 +0000 UTC m=+0.285389132 container attach b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.717710334 +0000 UTC m=+0.294081885 container died b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e886dde7ae8c5b1bd44fa4fec531edccf15d6c4cc7b0957503ab7a90215335a6-merged.mount: Deactivated successfully.
Nov 26 01:29:41 compute-0 podman[296468]: 2025-11-26 01:29:41.784560011 +0000 UTC m=+0.360931502 container remove b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:29:41 compute-0 systemd[1]: libpod-conmon-b7ff6cc24581f26e07b6b9435cdaa0a6abf8fd022d5839b1bf93ec4558cbcd77.scope: Deactivated successfully.
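[note] The one-shot kind_lichterman container printed "167 167", the uid/gid of the ceph user inside the image; cephadm runs short probes like this (plausibly a stat on a path such as /var/lib/ceph) to learn which ownership to apply to bind-mounted data directories. A hedged reproduction, with the image digest taken from the log and the stat target an assumption:

    import subprocess
    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Print the owning uid/gid of a path inside the image; expected output: "167 167".
    subprocess.run(["podman", "run", "--rm", IMG,
                    "stat", "-c", "%u %g", "/var/lib/ceph"], check=True)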
Nov 26 01:29:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:42 compute-0 podman[296558]: 2025-11-26 01:29:42.021498528 +0000 UTC m=+0.084432079 container create 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:29:42 compute-0 podman[296558]: 2025-11-26 01:29:41.991725317 +0000 UTC m=+0.054658908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:42 compute-0 systemd[1]: Started libpod-conmon-3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65.scope.
Nov 26 01:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ea5462875f8682b1e6319fb86e0b24db04cfb481d78493bf5072d6bf8ddaba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ea5462875f8682b1e6319fb86e0b24db04cfb481d78493bf5072d6bf8ddaba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ea5462875f8682b1e6319fb86e0b24db04cfb481d78493bf5072d6bf8ddaba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ea5462875f8682b1e6319fb86e0b24db04cfb481d78493bf5072d6bf8ddaba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
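[note] These kernel lines are informational, not errors: XFS filesystems created without the bigtime feature store inode timestamps only up to 0x7fffffff (2038-01-19T03:14:07Z), and the kernel flags that on every bind-remount into the container. A quick probe of whether a filesystem clamps post-2038 timestamps, assuming /var/tmp sits on the filesystem of interest:

    import os, tempfile
    LIMIT = 0x7fffffff                                 # 2038-01-19T03:14:07Z
    with tempfile.NamedTemporaryFile(dir="/var/tmp") as f:
        os.utime(f.name, (LIMIT + 1, LIMIT + 1))       # one second past the 32-bit limit
        clamped = int(os.stat(f.name).st_mtime) <= LIMIT
        print("timestamps clamped at 2038:", clamped)  # True on non-bigtime XFS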
Nov 26 01:29:42 compute-0 podman[296558]: 2025-11-26 01:29:42.216921606 +0000 UTC m=+0.279855227 container init 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:29:42 compute-0 podman[296558]: 2025-11-26 01:29:42.236153864 +0000 UTC m=+0.299087455 container start 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:29:42 compute-0 podman[296558]: 2025-11-26 01:29:42.244544178 +0000 UTC m=+0.307477759 container attach 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:29:42 compute-0 python3.9[296683]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:43 compute-0 festive_dirac[296603]: {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    "0": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "devices": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "/dev/loop3"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            ],
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_name": "ceph_lv0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_size": "21470642176",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "name": "ceph_lv0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "tags": {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_name": "ceph",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.crush_device_class": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.encrypted": "0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_id": "0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.vdo": "0"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            },
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "vg_name": "ceph_vg0"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        }
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    ],
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    "1": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "devices": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "/dev/loop4"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            ],
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_name": "ceph_lv1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_size": "21470642176",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "name": "ceph_lv1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "tags": {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_name": "ceph",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.crush_device_class": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.encrypted": "0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_id": "1",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.vdo": "0"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            },
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "vg_name": "ceph_vg1"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        }
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    ],
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    "2": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "devices": [
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "/dev/loop5"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            ],
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_name": "ceph_lv2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_size": "21470642176",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "name": "ceph_lv2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "tags": {
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.cluster_name": "ceph",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.crush_device_class": "",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.encrypted": "0",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osd_id": "2",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:                "ceph.vdo": "0"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            },
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "type": "block",
Nov 26 01:29:43 compute-0 festive_dirac[296603]:            "vg_name": "ceph_vg2"
Nov 26 01:29:43 compute-0 festive_dirac[296603]:        }
Nov 26 01:29:43 compute-0 festive_dirac[296603]:    ]
Nov 26 01:29:43 compute-0 festive_dirac[296603]: }
Nov 26 01:29:43 compute-0 systemd[1]: libpod-3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65.scope: Deactivated successfully.
Nov 26 01:29:43 compute-0 podman[296558]: 2025-11-26 01:29:43.103436956 +0000 UTC m=+1.166370547 container died 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-98ea5462875f8682b1e6319fb86e0b24db04cfb481d78493bf5072d6bf8ddaba-merged.mount: Deactivated successfully.
Nov 26 01:29:43 compute-0 podman[296558]: 2025-11-26 01:29:43.187784242 +0000 UTC m=+1.250717783 container remove 3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:29:43 compute-0 systemd[1]: libpod-conmon-3e1a8a76ccdc09054bf5ba02c99aea4d993654f9e9d9690540a5f2288c483e65.scope: Deactivated successfully.
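[note] The JSON emitted by festive_dirac is the inventory shape of `ceph-volume lvm list --format json`: keyed by OSD id, one LV record per OSD, here three OSDs on LVM over loop devices. A small parser, assuming the blob above was captured to a file (the filename is hypothetical):

    import json
    with open("lvm_list.json") as f:        # hypothetical capture of the output above
        inventory = json.load(f)
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        lv = lvs[0]
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")
    # Expected, from the log: osd.0 on /dev/loop3, osd.1 on /dev/loop4, osd.2 on /dev/loop5.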
Nov 26 01:29:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:44 compute-0 python3.9[296961]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.337213695 +0000 UTC m=+0.087953697 container create c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:29:44 compute-0 systemd[1]: Started libpod-conmon-c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9.scope.
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.307017922 +0000 UTC m=+0.057757984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.458668157 +0000 UTC m=+0.209408179 container init c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.469937762 +0000 UTC m=+0.220677754 container start c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.475853847 +0000 UTC m=+0.226593869 container attach c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:29:44 compute-0 vigorous_gauss[297014]: 167 167
Nov 26 01:29:44 compute-0 systemd[1]: libpod-c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9.scope: Deactivated successfully.
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.48204174 +0000 UTC m=+0.232781732 container died c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aa0123d5c575af5c0cba110d43d09ddd3583da0777a387d577e1061be3459ad-merged.mount: Deactivated successfully.
Nov 26 01:29:44 compute-0 podman[296992]: 2025-11-26 01:29:44.547682003 +0000 UTC m=+0.298422005 container remove c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gauss, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:29:44 compute-0 systemd[1]: libpod-conmon-c2a5e00f75689e772a0396305f648705edc3cccaf99dc44fe87aecac565b12f9.scope: Deactivated successfully.
Nov 26 01:29:44 compute-0 podman[297079]: 2025-11-26 01:29:44.790147315 +0000 UTC m=+0.059712738 container create 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:29:44 compute-0 podman[297079]: 2025-11-26 01:29:44.757054741 +0000 UTC m=+0.026620174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:29:44 compute-0 systemd[1]: Started libpod-conmon-7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2.scope.
Nov 26 01:29:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e86e9c6b850faccde6bc23a3c3a38e53227ae20911f1be2f021514a6518824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e86e9c6b850faccde6bc23a3c3a38e53227ae20911f1be2f021514a6518824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e86e9c6b850faccde6bc23a3c3a38e53227ae20911f1be2f021514a6518824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e86e9c6b850faccde6bc23a3c3a38e53227ae20911f1be2f021514a6518824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:29:44 compute-0 podman[297079]: 2025-11-26 01:29:44.964465223 +0000 UTC m=+0.234030636 container init 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:29:44 compute-0 podman[297079]: 2025-11-26 01:29:44.980445959 +0000 UTC m=+0.250011382 container start 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:29:44 compute-0 podman[297079]: 2025-11-26 01:29:44.987185437 +0000 UTC m=+0.256750901 container attach 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:29:45 compute-0 podman[297097]: 2025-11-26 01:29:45.044533479 +0000 UTC m=+0.136373300 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 01:29:45 compute-0 podman[297095]: 2025-11-26 01:29:45.052762609 +0000 UTC m=+0.145000911 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, version=9.4, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
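[note] The two health_status events are podman's periodic health checks (driven by transient systemd timers on this kind of host); both ceilometer_agent_ipmi and kepler report healthy with a zero failing streak, running the `test` command from the config_data healthcheck stanza. The same check can be run on demand, container names taken from the log:

    import subprocess
    for name in ("ceilometer_agent_ipmi", "kepler"):
        # Exit code 0 means healthy; podman also records the result in the
        # container's health state, as reflected in the events above.
        subprocess.run(["podman", "healthcheck", "run", name], check=False)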
Nov 26 01:29:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]: {
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_id": 0,
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "type": "bluestore"
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    },
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_id": 2,
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "type": "bluestore"
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    },
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_id": 1,
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:        "type": "bluestore"
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]:    }
Nov 26 01:29:46 compute-0 inspiring_chaum[297094]: }
Nov 26 01:29:46 compute-0 systemd[1]: libpod-7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2.scope: Deactivated successfully.
Nov 26 01:29:46 compute-0 systemd[1]: libpod-7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2.scope: Consumed 1.172s CPU time.
Nov 26 01:29:46 compute-0 podman[297079]: 2025-11-26 01:29:46.167235836 +0000 UTC m=+1.436801259 container died 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e86e9c6b850faccde6bc23a3c3a38e53227ae20911f1be2f021514a6518824-merged.mount: Deactivated successfully.
Nov 26 01:29:46 compute-0 python3.9[297259]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:46 compute-0 podman[297079]: 2025-11-26 01:29:46.301041043 +0000 UTC m=+1.570606466 container remove 7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:29:46 compute-0 systemd[1]: libpod-conmon-7016f6eb6a04275864e5b2b2d96d6a3ec641e0e4e713110124c6de04286fc8f2.scope: Deactivated successfully.
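[note] The second blob, printed by inspiring_chaum and keyed by OSD fsid with device/osd_id/type fields, matches the shape of `ceph-volume raw list --format json`. Cross-referencing it against the earlier LVM inventory confirms the id, uuid, and device mapping; a sketch assuming both blobs were saved to (hypothetical) files:

    import json
    with open("raw_list.json") as f:        # hypothetical capture of this blob
        raw = json.load(f)
    with open("lvm_list.json") as f:        # capture of the earlier lvm-list output
        lvm = json.load(f)
    for fsid, rec in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        lv = lvm[str(rec["osd_id"])][0]
        assert lv["tags"]["ceph.osd_fsid"] == fsid
        print(f"osd.{rec['osd_id']} ({rec['type']}): {rec['device']} "
              f"<- {lv['devices'][0]}")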
Nov 26 01:29:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:29:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:29:46 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:46 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5579f55a-1585-403f-b2c8-461791f5e28a does not exist
Nov 26 01:29:46 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b119a5ec-ca70-4e72-9d11-ab18ae2bb0ed does not exist
Nov 26 01:29:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:47 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:29:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:48 compute-0 python3.9[297483]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:50 compute-0 python3.9[297639]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:29:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
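[note] The autoscaler's "pg target" figures are reproducible from the logged inputs: target = capacity_ratio x bias x PG budget, where the budget here is 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100), after which the result is quantized to a power of two subject to minimums and gradual-change rules. For example, .mgr: 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337, exactly as logged. A worked check:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    BUDGET = 3 * 100   # 3 OSDs x mon_target_pg_per_osd (default 100)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * BUDGET:.18g}")
    # Prints 0.00215572..., 0.00061047..., 0.00064862..., matching the log.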
Nov 26 01:29:51 compute-0 python3.9[297794]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:52 compute-0 python3.9[297949]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:53 compute-0 python3.9[298104]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:54 compute-0 podman[298107]: 2025-11-26 01:29:54.163316903 +0000 UTC m=+0.144633201 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 01:29:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:55 compute-0 python3.9[298278]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:56 compute-0 python3.9[298433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:29:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:29:58 compute-0 python3.9[298588]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
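[note] Interleaved with the Ceph activity, the EDPM Ansible run is enabling the full set of modular libvirt socket units via ansible.builtin.systemd; most invocations only set enabled=True, while virtproxyd-tls.socket (earlier in the section) was also started. The equivalent outside Ansible, sketched in Python with the exact unit list taken from the log:

    import subprocess
    units = [
        "virtproxyd-tls.socket",
        "virtlogd.socket", "virtlogd-admin.socket",
        "virtnodedevd.socket", "virtnodedevd-ro.socket", "virtnodedevd-admin.socket",
        "virtproxyd.socket", "virtproxyd-ro.socket", "virtproxyd-admin.socket",
        "virtqemud.socket", "virtqemud-ro.socket", "virtqemud-admin.socket",
    ]
    for unit in units:                       # enable for next boot, as the tasks did
        subprocess.run(["systemctl", "enable", unit], check=False)
    subprocess.run(["systemctl", "start", "virtproxyd-tls.socket"], check=False)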
Nov 26 01:29:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:29:59 compute-0 podman[158021]: time="2025-11-26T01:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:29:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:29:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7266 "" "Go-http-client/1.1"
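[note] The podman[158021] lines come from a long-running `podman system service` instance logging libpod REST calls (a container listing plus a one-shot stats read), evidently from a Go client given the Go-http-client user agent. The same endpoints can be queried over the API socket; the socket path below is an assumption for a rootful service:

    import subprocess
    SOCK = "/run/podman/podman.sock"   # assumed rootful podman API socket path
    # Endpoint copied from the access-log line above.
    subprocess.run(["curl", "-s", "--unix-socket", SOCK,
                    "http://d/v4.9.3/libpod/containers/json?all=true"], check=False)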
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.780 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.780 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:29:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:29:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
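The DEBUG trail above is one complete polling cycle: each pollster is registered to a single-worker ThreadPoolExecutor, the local_instances discovery returns an empty list (no VMs on this compute node yet), and every pollster is therefore skipped before "Finished processing" is logged. A stripped-down sketch of that register/discover/skip loop; the names are illustrative, not ceilometer's internals:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no instances on the node, hence every "Skip pollster" line

    def run_pollster(name, discovery_cache):
        # Discovery results are shared per cycle, mirroring the
        # [{'local_instances': []}] discovery-cache entries in the log.
        resources = discovery_cache.setdefault("local_instances",
                                               discover_local_instances())
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"polling {len(resources)} resources for {name}")

    cache = {}
    with ThreadPoolExecutor(max_workers=1) as pool:  # "[1] threads" in the log
        for name in ("disk.device.usage", "power.state", "cpu", "memory.usage"):
            pool.submit(run_pollster, name, cache)
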
Nov 26 01:29:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:00 compute-0 python3.9[298744]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:30:01 compute-0 podman[298872]: 2025-11-26 01:30:01.307137414 +0000 UTC m=+0.116320150 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:30:01 compute-0 podman[298871]: 2025-11-26 01:30:01.342242194 +0000 UTC m=+0.154979319 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true)
Nov 26 01:30:01 compute-0 podman[298873]: 2025-11-26 01:30:01.364874106 +0000 UTC m=+0.164422003 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
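The three health_status entries above come from podman's periodic healthcheck runs against each container's configured test command. The same status and failing-streak fields can be read back on demand with podman inspect; container names are copied from the log:

    import json
    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        health = json.loads(out)
        print(name, health["Status"], "failing streak:", health["FailingStreak"])
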
Nov 26 01:30:01 compute-0 openstack_network_exporter[160178]: ERROR   01:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:30:01 compute-0 openstack_network_exporter[160178]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:30:01 compute-0 openstack_network_exporter[160178]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:30:01 compute-0 openstack_network_exporter[160178]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:30:01 compute-0 openstack_network_exporter[160178]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
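The exporter errors above mean it found no ovsdb-server or ovn-northd control sockets on this node, and the dpif-netdev calls fail because no userspace datapath exists here. A quick check for the control sockets the exporter looks for; the paths are the conventional OVS/OVN run directories, which is an assumption:

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "missing, matching the ERROR lines")
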
Nov 26 01:30:01 compute-0 python3.9[298955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:30:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:02 compute-0 python3.9[299117]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 01:30:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:04 compute-0 podman[299245]: 2025-11-26 01:30:04.119021887 +0000 UTC m=+0.116707620 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:30:04 compute-0 podman[299244]: 2025-11-26 01:30:04.123535193 +0000 UTC m=+0.129195889 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:30:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:04 compute-0 python3.9[299313]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:30:05 compute-0 python3.9[299465]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:30:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:06 compute-0 python3.9[299617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:30:07 compute-0 python3.9[299769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:30:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:08 compute-0 python3.9[299921]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:30:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:09 compute-0 python3.9[300073]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
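The ansible-ansible.builtin.file tasks above create the libvirt/qemu PKI directories with root ownership and the container_file_t SELinux type. A rough equivalent, assuming a plain chcon for the SELinux relabel rather than ansible's selinux bindings:

    import grp
    import os
    import pwd
    import subprocess

    def make_pki_dir(path, owner="root", group="root", mode=0o755):
        os.makedirs(path, mode=mode, exist_ok=True)
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)

    make_pki_dir("/etc/pki/libvirt")
    make_pki_dir("/etc/pki/libvirt/private")
    make_pki_dir("/etc/pki/CA")
    make_pki_dir("/etc/pki/qemu", group="qemu")
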
Nov 26 01:30:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:11 compute-0 python3.9[300225]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:12 compute-0 python3.9[300303]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:14 compute-0 python3.9[300455]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:15 compute-0 podman[300505]: 2025-11-26 01:30:15.255360607 +0000 UTC m=+0.119709505 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, name=ubi9, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 01:30:15 compute-0 podman[300506]: 2025-11-26 01:30:15.287168386 +0000 UTC m=+0.146769990 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:30:15 compute-0 python3.9[300567]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:16 compute-0 python3.9[300719]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:17 compute-0 python3.9[300797]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:18 compute-0 python3.9[300949]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:19 compute-0 python3.9[301027]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:20 compute-0 python3.9[301180]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:20 compute-0 python3.9[301258]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:21 compute-0 python3.9[301410]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:22 compute-0 python3.9[301488]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:23 compute-0 python3.9[301640]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:24 compute-0 podman[301666]: 2025-11-26 01:30:24.615317923 +0000 UTC m=+0.157174801 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 01:30:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:30:24.935 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:30:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:30:24.935 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:30:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:30:24.935 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:30:24 compute-0 python3.9[301737]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:26 compute-0 python3.9[301889]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:27 compute-0 python3.9[301967]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:28 compute-0 python3.9[302119]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 26 01:30:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:29 compute-0 python3.9[302272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:29 compute-0 podman[158021]: time="2025-11-26T01:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:30:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:30:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7272 "" "Go-http-client/1.1"
Nov 26 01:30:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:30 compute-0 python3.9[302424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:31 compute-0 openstack_network_exporter[160178]: ERROR   01:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:30:31 compute-0 openstack_network_exporter[160178]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:30:31 compute-0 openstack_network_exporter[160178]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:30:31 compute-0 openstack_network_exporter[160178]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:30:31 compute-0 openstack_network_exporter[160178]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:30:31 compute-0 podman[302576]: 2025-11-26 01:30:31.508944598 +0000 UTC m=+0.124491968 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:30:31 compute-0 podman[302577]: 2025-11-26 01:30:31.51654191 +0000 UTC m=+0.125497766 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 01:30:31 compute-0 podman[302594]: 2025-11-26 01:30:31.570984131 +0000 UTC m=+0.122960025 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 26 01:30:31 compute-0 python3.9[302578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:32 compute-0 python3.9[302794]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:33 compute-0 python3.9[302946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:34 compute-0 podman[303070]: 2025-11-26 01:30:34.553574593 +0000 UTC m=+0.115347093 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal)
Nov 26 01:30:34 compute-0 podman[303071]: 2025-11-26 01:30:34.561565726 +0000 UTC m=+0.108519602 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:30:34 compute-0 python3.9[303137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:35 compute-0 python3.9[303292]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:37 compute-0 python3.9[303444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:39 compute-0 python3.9[303596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:40 compute-0 python3.9[303748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:30:41
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta']
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:30:41 compute-0 python3.9[303900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:30:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:42 compute-0 python3.9[304052]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:43 compute-0 python3.9[304204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:44 compute-0 python3.9[304356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:45 compute-0 python3.9[304508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:45 compute-0 podman[304509]: 2025-11-26 01:30:45.578660146 +0000 UTC m=+0.140664830 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Nov 26 01:30:45 compute-0 podman[304510]: 2025-11-26 01:30:45.612764328 +0000 UTC m=+0.158299952 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 01:30:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:46 compute-0 python3.9[304622]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:47 compute-0 python3.9[304852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8662ae7f-62bc-4a5c-9221-afd5a6250a5b does not exist
Nov 26 01:30:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5db08651-4727-4c26-8cf4-382b6e4a20a5 does not exist
Nov 26 01:30:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c1dd238f-74fd-4035-b2a0-f04cf83dad77 does not exist
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:30:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:30:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:30:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:47 compute-0 python3.9[304980]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:30:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.023344283 +0000 UTC m=+0.065578092 container create a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 01:30:49 compute-0 systemd[1]: Started libpod-conmon-a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e.scope.
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.00174112 +0000 UTC m=+0.043974919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.16858788 +0000 UTC m=+0.210821739 container init a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.190175523 +0000 UTC m=+0.232409342 container start a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:30:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.198590478 +0000 UTC m=+0.240824337 container attach a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:30:49 compute-0 priceless_gagarin[305213]: 167 167
Nov 26 01:30:49 compute-0 systemd[1]: libpod-a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e.scope: Deactivated successfully.
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.202032084 +0000 UTC m=+0.244265903 container died a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-997b5e571c90162ec361fba53bc8e06abdac836a51d0a91a2dac7ff3767b67fa-merged.mount: Deactivated successfully.
Nov 26 01:30:49 compute-0 podman[305198]: 2025-11-26 01:30:49.273623163 +0000 UTC m=+0.315856952 container remove a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:30:49 compute-0 systemd[1]: libpod-conmon-a5f24b7aeec76e6a78d1afbfec324966ffa29353f1ca79a299d66cbf6f8f931e.scope: Deactivated successfully.
Nov 26 01:30:49 compute-0 podman[305282]: 2025-11-26 01:30:49.521495335 +0000 UTC m=+0.073151374 container create 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:30:49 compute-0 podman[305282]: 2025-11-26 01:30:49.494487291 +0000 UTC m=+0.046143410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:49 compute-0 systemd[1]: Started libpod-conmon-005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e.scope.
Nov 26 01:30:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
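
The kernel's "supports timestamps until 2038 (0x7fffffff)" lines are informational, not errors: the XFS filesystem backing these overlay mounts uses 32-bit inode timestamps (the pre-bigtime on-disk format), so the largest representable time is signed-32-bit seconds since the epoch. A quick check of where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after 1970-01-01 UTC, the limit the kernel reports:
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00
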
Nov 26 01:30:49 compute-0 podman[305282]: 2025-11-26 01:30:49.688260513 +0000 UTC m=+0.239916632 container init 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:30:49 compute-0 podman[305282]: 2025-11-26 01:30:49.713425036 +0000 UTC m=+0.265081105 container start 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:30:49 compute-0 podman[305282]: 2025-11-26 01:30:49.720506524 +0000 UTC m=+0.272162593 container attach 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:30:49 compute-0 python3.9[305329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:50 compute-0 python3.9[305410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
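
The paired ansible-ansible.legacy.stat / ansible-ansible.legacy.file invocations are Ansible's usual follow-up after templating: stat the destination first, then ensure ownership and mode on the drop-in (root:root, 0644, state=file). The same pair repeats below for each libvirt socket unit (virtnodedevd, virtnodedevd-ro, virtnodedevd-admin, virtproxyd, virtproxyd-ro, virtproxyd-admin, virtqemud). A rough stand-in for what the file module is enforcing here (path and mode taken from the log; the helper itself is illustrative, not Ansible code):

    import os, stat

    def ensure_mode_owner(path, mode=0o644, uid=0, gid=0):
        """Idempotently enforce mode/owner on an existing file, like state=file."""
        st = os.stat(path)
        if stat.S_IMODE(st.st_mode) != mode:
            os.chmod(path, mode)
        if (st.st_uid, st.st_gid) != (uid, gid):
            os.chown(path, uid, gid)

    # e.g. ensure_mode_owner("/etc/systemd/system/virtnodedevd.socket.d/override.conf")
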
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:30:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
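
Each pg_autoscaler pass logs, per pool, the fraction of raw capacity in use, a bias, and a raw PG target before quantization; the 64411926528 in every effective_target_ratio line is the raw capacity in bytes (3 × 21470642176-byte OSD LVs, the ~60 GiB the pgmap lines report). The logged targets are consistent with raw_target = capacity_ratio × bias × 300, where 300 plausibly reflects 3 OSDs × mon_target_pg_per_osd (default 100); the result is then snapped to a power of two subject to pool minimums (hence '.mgr' floors at 1 and cephfs.cephfs.meta at 16) and only applied when it differs enough from the current pg_num. A check against the logged numbers:

    # Reproduce the raw pg targets logged by the autoscaler above.
    # Assumption: 300 = 3 OSDs * 100 target PGs per OSD (mon_target_pg_per_osd).
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 300)
    # Matches the logged targets to float precision:
    #   .mgr               -> 0.0021557249951162337
    #   cephfs.cephfs.meta -> 0.0006104707950771635
    #   .rgw.root          -> 7.630884938464544e-05
    #   default.rgw.log    -> 0.0006486252197694863
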
Nov 26 01:30:50 compute-0 goofy_mcnulty[305327]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:30:50 compute-0 goofy_mcnulty[305327]: --> relative data size: 1.0
Nov 26 01:30:50 compute-0 goofy_mcnulty[305327]: --> All data devices are unavailable
Nov 26 01:30:50 compute-0 systemd[1]: libpod-005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e.scope: Deactivated successfully.
Nov 26 01:30:50 compute-0 systemd[1]: libpod-005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e.scope: Consumed 1.175s CPU time.
Nov 26 01:30:50 compute-0 podman[305282]: 2025-11-26 01:30:50.948369117 +0000 UTC m=+1.500025256 container died 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-548ca2a60841faf7b7a01f89c14d5949dc3d220f8ecb39aca7b79bb6a27558cd-merged.mount: Deactivated successfully.
Nov 26 01:30:51 compute-0 podman[305282]: 2025-11-26 01:30:51.058465442 +0000 UTC m=+1.610121501 container remove 005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:30:51 compute-0 systemd[1]: libpod-conmon-005387192c83e228f162353d47455fd5c13aa422562765e1e49bdd90f8dc484e.scope: Deactivated successfully.
Nov 26 01:30:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:52 compute-0 python3.9[305705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.231671509 +0000 UTC m=+0.077904727 container create 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.197285889 +0000 UTC m=+0.043519177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:52 compute-0 systemd[1]: Started libpod-conmon-3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0.scope.
Nov 26 01:30:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.375811805 +0000 UTC m=+0.222293610 container init 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.394639371 +0000 UTC m=+0.240872609 container start 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.400189276 +0000 UTC m=+0.246422494 container attach 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:30:52 compute-0 dreamy_bardeen[305780]: 167 167
Nov 26 01:30:52 compute-0 systemd[1]: libpod-3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0.scope: Deactivated successfully.
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.404809205 +0000 UTC m=+0.251042463 container died 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a6cd863c68a68f39691f73832d803d1122b4a90ca1b414bf5c8372db8e8bff2-merged.mount: Deactivated successfully.
Nov 26 01:30:52 compute-0 podman[305739]: 2025-11-26 01:30:52.48590259 +0000 UTC m=+0.332135838 container remove 3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:30:52 compute-0 systemd[1]: libpod-conmon-3e41b1752bca0a6ed667d1ccf40a13986ef720ab21351054af6ed9d09ae251d0.scope: Deactivated successfully.
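
dreamy_bardeen printed only "167 167" before exiting (eager_bose repeats it below). 167 is the fixed uid/gid of the ceph user in these CentOS-based ceph images, so the output has the shape of an ownership probe on a ceph data directory; the exact command cephadm ran is not visible in this log. A hypothetical reproduction of such a probe:

    import subprocess

    # Ask the same image for the owner of /var/lib/ceph; expect "167 167".
    # Image digest copied from the log; the probe command itself is an assumption.
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
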
Nov 26 01:30:52 compute-0 podman[305855]: 2025-11-26 01:30:52.725421019 +0000 UTC m=+0.056456957 container create f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:30:52 compute-0 systemd[1]: Started libpod-conmon-f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363.scope.
Nov 26 01:30:52 compute-0 python3.9[305849]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:52 compute-0 podman[305855]: 2025-11-26 01:30:52.703866307 +0000 UTC m=+0.034902235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:52 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ec8fd76094cd0ab129c7904147a20ecb4abf89b73f651740c3b66dcebf293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ec8fd76094cd0ab129c7904147a20ecb4abf89b73f651740c3b66dcebf293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ec8fd76094cd0ab129c7904147a20ecb4abf89b73f651740c3b66dcebf293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ec8fd76094cd0ab129c7904147a20ecb4abf89b73f651740c3b66dcebf293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:52 compute-0 podman[305855]: 2025-11-26 01:30:52.885260853 +0000 UTC m=+0.216296811 container init f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:30:52 compute-0 podman[305855]: 2025-11-26 01:30:52.904418298 +0000 UTC m=+0.235454236 container start f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:30:52 compute-0 podman[305855]: 2025-11-26 01:30:52.909738586 +0000 UTC m=+0.240774524 container attach f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:30:53 compute-0 loving_thompson[305871]: {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    "0": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "devices": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "/dev/loop3"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            ],
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_name": "ceph_lv0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_size": "21470642176",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "name": "ceph_lv0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "tags": {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_name": "ceph",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.crush_device_class": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.encrypted": "0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_id": "0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.vdo": "0"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            },
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "vg_name": "ceph_vg0"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        }
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    ],
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    "1": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "devices": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "/dev/loop4"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            ],
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_name": "ceph_lv1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_size": "21470642176",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "name": "ceph_lv1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "tags": {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_name": "ceph",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.crush_device_class": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.encrypted": "0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_id": "1",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.vdo": "0"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            },
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "vg_name": "ceph_vg1"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        }
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    ],
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    "2": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "devices": [
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "/dev/loop5"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            ],
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_name": "ceph_lv2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_size": "21470642176",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "name": "ceph_lv2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "tags": {
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.cluster_name": "ceph",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.crush_device_class": "",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.encrypted": "0",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osd_id": "2",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:                "ceph.vdo": "0"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            },
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "type": "block",
Nov 26 01:30:53 compute-0 loving_thompson[305871]:            "vg_name": "ceph_vg2"
Nov 26 01:30:53 compute-0 loving_thompson[305871]:        }
Nov 26 01:30:53 compute-0 loving_thompson[305871]:    ]
Nov 26 01:30:53 compute-0 loving_thompson[305871]: }
Nov 26 01:30:53 compute-0 systemd[1]: libpod-f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363.scope: Deactivated successfully.
Nov 26 01:30:53 compute-0 podman[305855]: 2025-11-26 01:30:53.700579964 +0000 UTC m=+1.031615892 container died f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96ec8fd76094cd0ab129c7904147a20ecb4abf89b73f651740c3b66dcebf293-merged.mount: Deactivated successfully.
Nov 26 01:30:53 compute-0 podman[305855]: 2025-11-26 01:30:53.78994756 +0000 UTC m=+1.120983488 container remove f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:30:53 compute-0 systemd[1]: libpod-conmon-f5aa81c661c6db90a6fe8798ea27727d5a98a779f6d39eb5cfba2ecca41fc363.scope: Deactivated successfully.
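
The JSON loving_thompson printed is a per-OSD inventory keyed by OSD id, with the shape of ceph-volume lvm list --format json output: three LVM logical volumes, one per OSD, each 21470642176 bytes (~20 GiB). That also explains goofy_mcnulty's earlier report ("passed data devices: 0 physical, 3 LVM" ... "All data devices are unavailable"): all three candidate LVs already carry prepared OSDs, so a batch run has nothing left to create. A short sketch reducing the blob to an osd_id -> device map:

    import json

    def osd_map(lvm_list_json: str):
        """Map osd_id -> (lv_path, osd_fsid) from a ceph-volume lvm-list-style blob."""
        data = json.loads(lvm_list_json)
        out = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                out[int(osd_id)] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return out

    # Fed the JSON above, this yields:
    #   {0: ('/dev/ceph_vg0/ceph_lv0', '835781ef-644a-4834-abb3-029e5bcba0ff'),
    #    1: ('/dev/ceph_vg1/ceph_lv1', 'a345f9b0-19f1-464f-95c4-9c68bb202f1e'),
    #    2: ('/dev/ceph_vg2/ceph_lv2', '8f697525-afad-4f38-820d-80587338cf3b')}
    # Sanity check: 3 * 21470642176 bytes = 64411926528 ~= 60 GiB, matching the pgmap lines.
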
Nov 26 01:30:53 compute-0 python3.9[306031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:54 compute-0 python3.9[306219]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:54 compute-0 podman[306289]: 2025-11-26 01:30:54.941878203 +0000 UTC m=+0.081528648 container create 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:54.918069488 +0000 UTC m=+0.057719963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:55 compute-0 systemd[1]: Started libpod-conmon-766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d.scope.
Nov 26 01:30:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:55.082926862 +0000 UTC m=+0.222577387 container init 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:55.101701737 +0000 UTC m=+0.241352182 container start 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:55.108057604 +0000 UTC m=+0.247708089 container attach 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:30:55 compute-0 eager_bose[306345]: 167 167
Nov 26 01:30:55 compute-0 systemd[1]: libpod-766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d.scope: Deactivated successfully.
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:55.111870281 +0000 UTC m=+0.251520746 container died 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-38d09261c79201dec65ae7e8e8f002fbaf57332bd15ca38cbdbeb5c7c6be4d2f-merged.mount: Deactivated successfully.
Nov 26 01:30:55 compute-0 podman[306332]: 2025-11-26 01:30:55.179775637 +0000 UTC m=+0.162020216 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:30:55 compute-0 podman[306289]: 2025-11-26 01:30:55.189107438 +0000 UTC m=+0.328757873 container remove 766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:30:55 compute-0 systemd[1]: libpod-conmon-766ea30768c56e2dac78dee7d3fba4aafaf2251b6b74a71dc4e8523cde4bca5d.scope: Deactivated successfully.
Nov 26 01:30:55 compute-0 podman[306439]: 2025-11-26 01:30:55.395305787 +0000 UTC m=+0.071341353 container create f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:30:55 compute-0 podman[306439]: 2025-11-26 01:30:55.367275554 +0000 UTC m=+0.043311150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:30:55 compute-0 systemd[1]: Started libpod-conmon-f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc.scope.
Nov 26 01:30:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09503dc347ef8f5ac73e657aea1324397d930a5e3acae8e509ff0881ad1de0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09503dc347ef8f5ac73e657aea1324397d930a5e3acae8e509ff0881ad1de0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09503dc347ef8f5ac73e657aea1324397d930a5e3acae8e509ff0881ad1de0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09503dc347ef8f5ac73e657aea1324397d930a5e3acae8e509ff0881ad1de0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:30:55 compute-0 podman[306439]: 2025-11-26 01:30:55.597685049 +0000 UTC m=+0.273720645 container init f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:30:55 compute-0 podman[306439]: 2025-11-26 01:30:55.610560419 +0000 UTC m=+0.286596005 container start f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:30:55 compute-0 python3.9[306483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:55 compute-0 podman[306439]: 2025-11-26 01:30:55.62350598 +0000 UTC m=+0.299541536 container attach f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:30:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:56 compute-0 python3.9[306566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:56 compute-0 frosty_dirac[306484]: {
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_id": 0,
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "type": "bluestore"
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    },
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_id": 2,
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "type": "bluestore"
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    },
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_id": 1,
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:        "type": "bluestore"
Nov 26 01:30:56 compute-0 frosty_dirac[306484]:    }
Nov 26 01:30:56 compute-0 frosty_dirac[306484]: }
Nov 26 01:30:56 compute-0 systemd[1]: libpod-f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc.scope: Deactivated successfully.
Nov 26 01:30:56 compute-0 systemd[1]: libpod-f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc.scope: Consumed 1.190s CPU time.
Nov 26 01:30:56 compute-0 podman[306439]: 2025-11-26 01:30:56.808512376 +0000 UTC m=+1.484547952 container died f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b09503dc347ef8f5ac73e657aea1324397d930a5e3acae8e509ff0881ad1de0f-merged.mount: Deactivated successfully.
Nov 26 01:30:56 compute-0 podman[306439]: 2025-11-26 01:30:56.913962581 +0000 UTC m=+1.589998167 container remove f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:30:56 compute-0 systemd[1]: libpod-conmon-f6c82ca1e11fd6acb003b2e62efc3ae82a9c3d3ee53351b823d91881ec8844cc.scope: Deactivated successfully.
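
frosty_dirac printed a second inventory, this time keyed by OSD uuid and resembling ceph-volume raw list output: the same three bluestore OSDs, addressed via their /dev/mapper/ceph_vgN-ceph_lvN device nodes. The two dumps should agree on the osd_id <-> osd_uuid pairing; a quick consistency check, assuming both blobs were captured as strings:

    import json

    def cross_check(lvm_blob: str, raw_blob: str) -> bool:
        """True if both inventories pair the same osd_id with the same OSD uuid."""
        by_id = {
            int(osd_id): lv["tags"]["ceph.osd_fsid"]
            for osd_id, lvs in json.loads(lvm_blob).items()
            for lv in lvs
        }
        by_uuid = {
            entry["osd_id"]: uuid
            for uuid, entry in json.loads(raw_blob).items()
        }
        return by_id == by_uuid

    # For the two dumps above this returns True: osd 0/1/2 map to the same
    # uuids (835781ef-..., a345f9b0-..., 8f697525-...) in both listings.
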
Nov 26 01:30:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:30:56 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:30:56 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev afa895ad-20ab-4f15-80a1-784841541e0d does not exist
Nov 26 01:30:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c1c301b7-30a8-4eae-81f5-3094ba281412 does not exist
Nov 26 01:30:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:30:57 compute-0 python3.9[306804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:30:58 compute-0 python3.9[306884]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:30:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:30:59 compute-0 python3.9[307036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:30:59 compute-0 podman[158021]: time="2025-11-26T01:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:30:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:30:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7272 "" "Go-http-client/1.1"
Nov 26 01:30:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:00 compute-0 python3.9[307114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:01 compute-0 python3.9[307266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:01 compute-0 openstack_network_exporter[160178]: ERROR   01:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:31:01 compute-0 openstack_network_exporter[160178]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:31:01 compute-0 openstack_network_exporter[160178]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:31:01 compute-0 openstack_network_exporter[160178]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:31:01 compute-0 openstack_network_exporter[160178]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:31:01 compute-0 python3.9[307344]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:02 compute-0 podman[307345]: 2025-11-26 01:31:02.583762066 +0000 UTC m=+0.129109927 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 26 01:31:02 compute-0 podman[307346]: 2025-11-26 01:31:02.596329287 +0000 UTC m=+0.139925249 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:31:02 compute-0 podman[307347]: 2025-11-26 01:31:02.646767326 +0000 UTC m=+0.182329104 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 01:31:03 compute-0 python3.9[307562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:04 compute-0 python3.9[307640]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:05 compute-0 podman[307738]: 2025-11-26 01:31:05.57968897 +0000 UTC m=+0.118505911 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:31:05 compute-0 podman[307730]: 2025-11-26 01:31:05.593610799 +0000 UTC m=+0.135366362 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 01:31:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:06 compute-0 python3.9[307832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:06 compute-0 python3.9[307910]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:07 compute-0 python3.9[308062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:08 compute-0 python3.9[308140]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:09 compute-0 python3.9[308292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:10 compute-0 python3.9[308370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:11 compute-0 python3.9[308522]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:12 compute-0 python3.9[308600]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:13 compute-0 python3.9[308750]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:31:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:14 compute-0 python3.9[308905]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 26 01:31:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:16 compute-0 podman[309029]: 2025-11-26 01:31:16.474629178 +0000 UTC m=+0.109661164 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release-0.7.12=, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, build-date=2024-09-18T21:23:30)
Nov 26 01:31:16 compute-0 podman[309030]: 2025-11-26 01:31:16.498460154 +0000 UTC m=+0.123768078 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:31:16 compute-0 python3.9[309095]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:18 compute-0 python3.9[309247]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:20 compute-0 python3.9[309402]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:21 compute-0 python3.9[309554]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:22 compute-0 python3.9[309706]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:24 compute-0 python3.9[309858]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:31:24.936 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:31:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:31:24.937 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:31:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:31:24.937 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:31:25 compute-0 python3.9[310010]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:25 compute-0 podman[310064]: 2025-11-26 01:31:25.572930506 +0000 UTC m=+0.121943347 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:31:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:26 compute-0 python3.9[310181]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:27 compute-0 python3.9[310333]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:28 compute-0 python3.9[310485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:29 compute-0 python3.9[310637]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:29 compute-0 podman[158021]: time="2025-11-26T01:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:31:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:31:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7270 "" "Go-http-client/1.1"
Nov 26 01:31:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:30 compute-0 python3.9[310789]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:31:31 compute-0 openstack_network_exporter[160178]: ERROR   01:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:31:31 compute-0 openstack_network_exporter[160178]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:31:31 compute-0 openstack_network_exporter[160178]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:31:31 compute-0 openstack_network_exporter[160178]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:31:31 compute-0 openstack_network_exporter[160178]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:31:31 compute-0 python3.9[310941]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:31:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:33 compute-0 podman[311070]: 2025-11-26 01:31:33.293147006 +0000 UTC m=+0.111251708 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:31:33 compute-0 podman[311074]: 2025-11-26 01:31:33.309621256 +0000 UTC m=+0.121357900 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:31:33 compute-0 podman[311077]: 2025-11-26 01:31:33.343242305 +0000 UTC m=+0.146820352 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:31:33 compute-0 python3.9[311128]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:31:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:35 compute-0 python3.9[311313]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:36 compute-0 podman[311409]: 2025-11-26 01:31:36.169273053 +0000 UTC m=+0.100755745 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:31:36 compute-0 podman[311408]: 2025-11-26 01:31:36.182508973 +0000 UTC m=+0.118756488 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 01:31:36 compute-0 python3.9[311462]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120693.97067-1017-123522498283649/.source.xml follow=False _original_basename=secret.xml.j2 checksum=406eafbdd6868ac53b816a54269a9bf3a681e42d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:37 compute-0 python3.9[311627]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 36901f64-240e-5c29-a2e2-29b56f2c329c#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:31:37 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 01:31:37 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 01:31:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:38 compute-0 python3.9[311808]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:31:41
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:31:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:42 compute-0 python3.9[312271]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:43 compute-0 python3.9[312423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:44 compute-0 python3.9[312501]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:45 compute-0 python3.9[312653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:46 compute-0 podman[312778]: 2025-11-26 01:31:46.988707534 +0000 UTC m=+0.125497556 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118)
Nov 26 01:31:46 compute-0 podman[312777]: 2025-11-26 01:31:46.994985749 +0000 UTC m=+0.131619577 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, release-0.7.12=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, architecture=x86_64)
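[editor's note] The health_status=healthy events above come from podman's per-container healthcheck timers executing the 'test' command listed under config_data['healthcheck']. The same check can be fired by hand to reproduce one of these events; a small sketch, with the container name taken from the log line:

    import subprocess

    # Run the container's configured healthcheck once; exit code 0
    # corresponds to the health_status=healthy seen in the journal.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy", result.stdout)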
Nov 26 01:31:47 compute-0 python3.9[312840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:47 compute-0 python3.9[312920]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:49 compute-0 python3.9[313073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:50 compute-0 python3.9[313151]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.piuu7zut recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:31:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
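[editor's note] The block above is one _maybe_adjust pass of the mgr's pg_autoscaler: for each pool it logs the fraction of raw capacity used, the pool's bias, and the resulting PG target before rounding. The raw targets are reproducible as usage × bias × (OSD count × mon_target_pg_per_osd); the multiplier these numbers imply is 300 (e.g. 3 OSDs × the default 100 PGs/OSD): 7.185749983720779e-06 × 1.0 × 300 = 0.0021557249951162337, exactly as logged for '.mgr'. A rough sketch of that arithmetic — illustration only, not the ceph-mgr module's code:

    # Assumes the 300-PG budget implied by the logged values
    # (e.g. 3 OSDs x mon_target_pg_per_osd=100).
    def raw_pg_target(usage_ratio, bias, n_osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * n_osds * target_pg_per_osd

    def quantize(target, floor=1):
        # Round up to the next power of two. The real module additionally
        # clamps to per-pool minimums (pg_num_min) and skips changes below
        # a 3x threshold, which is why the idle pools above stay at 32.
        pg = floor
        while pg < target:
            pg *= 2
        return pg

    t = raw_pg_target(7.185749983720779e-06, 1.0)
    print(t)            # 0.0021557249951162337, as logged for '.mgr'
    print(quantize(t))  # 1, matching "quantized to 1 (current 1)"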
Nov 26 01:31:51 compute-0 python3.9[313303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:52 compute-0 python3.9[313381]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:53 compute-0 python3.9[313533]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
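[editor's note] This task shells out to `nft -j list ruleset` to snapshot the live ruleset as JSON before the edpm jump chains are written. The same inventory step can be reproduced directly; a sketch that prints each chain in the current ruleset (requires root and an nft built with JSON support):

    import json
    import subprocess

    out = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        capture_output=True, text=True, check=True,
    ).stdout

    # nft's JSON form is {"nftables": [ {...}, ... ]} where each entry wraps
    # a "table", "chain", "rule", or "metainfo" object.
    for item in json.loads(out).get("nftables", []):
        if "chain" in item:
            chain = item["chain"]
            print(chain["family"], chain["table"], chain["name"])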
Nov 26 01:31:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.214021) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714214073, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1830, "num_deletes": 250, "total_data_size": 3122940, "memory_usage": 3178696, "flush_reason": "Manual Compaction"}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714231300, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1762028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11724, "largest_seqno": 13553, "table_properties": {"data_size": 1756108, "index_size": 2995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14721, "raw_average_key_size": 20, "raw_value_size": 1743022, "raw_average_value_size": 2374, "num_data_blocks": 139, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120505, "oldest_key_time": 1764120505, "file_creation_time": 1764120714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 17403 microseconds, and 8803 cpu microseconds.
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.231419) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1762028 bytes OK
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.231447) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.234159) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.234179) EVENT_LOG_v1 {"time_micros": 1764120714234172, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.234202) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3115232, prev total WAL file size 3115232, number of live WAL files 2.
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.236460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1720KB)], [29(7584KB)]
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714236569, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9528765, "oldest_snapshot_seqno": -1}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4003 keys, 7530418 bytes, temperature: kUnknown
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714297472, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7530418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7501925, "index_size": 17375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95158, "raw_average_key_size": 23, "raw_value_size": 7427976, "raw_average_value_size": 1855, "num_data_blocks": 757, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.297762) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7530418 bytes
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.301391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.2 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4416, records dropped: 413 output_compression: NoCompression
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.301456) EVENT_LOG_v1 {"time_micros": 1764120714301429, "job": 12, "event": "compaction_finished", "compaction_time_micros": 60986, "compaction_time_cpu_micros": 36044, "output_level": 6, "num_output_files": 1, "total_output_size": 7530418, "num_input_records": 4416, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714302462, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120714305915, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.235916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.306164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.306173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.306177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.306180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:31:54 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:31:54.306184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
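[editor's note] Each EVENT_LOG_v1 line in the rocksdb burst above carries a JSON document after the marker, so the flush/compaction story is machine-readable: job 12 read 9528765 bytes and wrote 7530418 bytes in 60986 µs, which is where the logged 156.2 MB/s read and 123.5 MB/s write figures come from (bytes/µs is numerically MB/s). A small sketch that extracts those payloads from a saved journal dump — the filename is only an example:

    import json
    import re
    import sys

    # Pull the JSON payload out of rocksdb EVENT_LOG_v1 journal lines.
    marker = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.txt"
    for line in open(path):
        m = marker.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "compaction_finished":
            mbps = ev["total_output_size"] / ev["compaction_time_micros"]
            print(f"job {ev['job']}: wrote {ev['total_output_size']} B "
                  f"in {ev['compaction_time_micros']} us (~{mbps:.1f} MB/s)")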
Nov 26 01:31:54 compute-0 python3[313686]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 01:31:55 compute-0 podman[313810]: 2025-11-26 01:31:55.836092156 +0000 UTC m=+0.139169578 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:31:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:56 compute-0 python3.9[313854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:56 compute-0 python3.9[313932]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:57 compute-0 python3.9[314154]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:31:58 compute-0 python3.9[314279]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:31:58 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 72d10a0f-e0e4-4825-bfd8-d213e0b15d95 does not exist
Nov 26 01:31:58 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 57f2e25f-b6b2-4abd-85c9-9f31a3177456 does not exist
Nov 26 01:31:58 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5c920712-977e-4681-9693-1e99c2524e16 does not exist
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:31:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:31:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:31:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:31:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:31:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:31:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
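[editor's note] The handle_command/audit pairs above are cephadm's mgr talking to the monitor over the standard JSON command interface ("config generate-minimal-conf", "auth get", "osd tree", ...). The same interface is reachable from the librados Python binding; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and client.admin keyring on the node:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Dispatch the same mon command the mgr issues in the audit log above.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())
    cluster.shutdown()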
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.472413766 +0000 UTC m=+0.076510053 container create 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.437816633 +0000 UTC m=+0.041912990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:31:59 compute-0 systemd[1]: Started libpod-conmon-442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9.scope.
Nov 26 01:31:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:31:59 compute-0 python3.9[314585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.620076628 +0000 UTC m=+0.224173055 container init 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.639061592 +0000 UTC m=+0.243157899 container start 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:31:59 compute-0 happy_newton[314598]: 167 167
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.646755488 +0000 UTC m=+0.250851805 container attach 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:31:59 compute-0 systemd[1]: libpod-442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9.scope: Deactivated successfully.
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.650666559 +0000 UTC m=+0.254762876 container died 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4ec9f15afaaf75419f3bda3ca8bb0f7643776194038b16b2c1e7458007e6ce7-merged.mount: Deactivated successfully.
Nov 26 01:31:59 compute-0 podman[314582]: 2025-11-26 01:31:59.719402841 +0000 UTC m=+0.323499128 container remove 442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:31:59 compute-0 systemd[1]: libpod-conmon-442cb385661ecee1cef3a05e3767b0d4c3b3bdff32b47cff293b6f15f45e0ee9.scope: Deactivated successfully.
Nov 26 01:31:59 compute-0 podman[158021]: time="2025-11-26T01:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:31:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:31:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7270 "" "Go-http-client/1.1"
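[editor's note] The podman[158021] lines are the podman system service answering libpod REST calls (here from a Go HTTP client) over its API socket. The same endpoint can be queried with nothing but the standard library; a sketch assuming the default root socket at /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket (enough for the libpod API)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the GET logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])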
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.780 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.781 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:31:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:31:59 compute-0 podman[314624]: 2025-11-26 01:31:59.958274579 +0000 UTC m=+0.075152485 container create 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:32:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:00 compute-0 podman[314624]: 2025-11-26 01:31:59.924514679 +0000 UTC m=+0.041392665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:32:00 compute-0 systemd[1]: Started libpod-conmon-0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522.scope.
Nov 26 01:32:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:00 compute-0 podman[314624]: 2025-11-26 01:32:00.091692981 +0000 UTC m=+0.208570897 container init 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:32:00 compute-0 podman[314624]: 2025-11-26 01:32:00.101788285 +0000 UTC m=+0.218666181 container start 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:32:00 compute-0 podman[314624]: 2025-11-26 01:32:00.107529066 +0000 UTC m=+0.224406982 container attach 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:32:00 compute-0 python3.9[314720]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:01 compute-0 distracted_noether[314646]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:32:01 compute-0 distracted_noether[314646]: --> relative data size: 1.0
Nov 26 01:32:01 compute-0 distracted_noether[314646]: --> All data devices are unavailable
Nov 26 01:32:01 compute-0 systemd[1]: libpod-0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522.scope: Deactivated successfully.
Nov 26 01:32:01 compute-0 podman[314624]: 2025-11-26 01:32:01.317550053 +0000 UTC m=+1.434427979 container died 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:32:01 compute-0 systemd[1]: libpod-0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522.scope: Consumed 1.148s CPU time.
Nov 26 01:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b7730e2e5efd3e1270821293761dec6e5c5d48c9c2c652746c3c6baaee9595-merged.mount: Deactivated successfully.
Nov 26 01:32:01 compute-0 openstack_network_exporter[160178]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:32:01 compute-0 openstack_network_exporter[160178]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:32:01 compute-0 openstack_network_exporter[160178]: ERROR   01:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:32:01 compute-0 openstack_network_exporter[160178]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:32:01 compute-0 openstack_network_exporter[160178]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:32:01 compute-0 podman[314624]: 2025-11-26 01:32:01.43513904 +0000 UTC m=+1.552016966 container remove 0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noether, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:32:01 compute-0 systemd[1]: libpod-conmon-0942f469bb00420a72dc8146f8c2609c97516a57834da95f4a2443bb997c8522.scope: Deactivated successfully.
Nov 26 01:32:01 compute-0 python3.9[314932]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.649140099 +0000 UTC m=+0.093521701 container create f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.61572261 +0000 UTC m=+0.060104262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:32:02 compute-0 systemd[1]: Started libpod-conmon-f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03.scope.
Nov 26 01:32:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.789362833 +0000 UTC m=+0.233744495 container init f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.800543137 +0000 UTC m=+0.244924719 container start f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.807700658 +0000 UTC m=+0.252082320 container attach f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:32:02 compute-0 sleepy_wilson[315115]: 167 167
Nov 26 01:32:02 compute-0 systemd[1]: libpod-f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03.scope: Deactivated successfully.
Nov 26 01:32:02 compute-0 conmon[315115]: conmon f207465a397cb83e15cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03.scope/container/memory.events
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.812331089 +0000 UTC m=+0.256712701 container died f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e7fefef879e6f84f5bb0c045e84b5390041174f6390fca8b17d882dc3bfa936-merged.mount: Deactivated successfully.
Nov 26 01:32:02 compute-0 podman[315074]: 2025-11-26 01:32:02.896226098 +0000 UTC m=+0.340607720 container remove f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:32:02 compute-0 systemd[1]: libpod-conmon-f207465a397cb83e15cc4137d6c71dfb082f993ee7852962a8e6bb5a3d66bf03.scope: Deactivated successfully.
Nov 26 01:32:03 compute-0 python3.9[315157]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:03 compute-0 podman[315165]: 2025-11-26 01:32:03.186980784 +0000 UTC m=+0.084036084 container create dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:32:03 compute-0 podman[315165]: 2025-11-26 01:32:03.160589502 +0000 UTC m=+0.057644812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:32:03 compute-0 systemd[1]: Started libpod-conmon-dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d.scope.
Nov 26 01:32:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d8a87263fa2936f462e9fa3f53f65b773caef057591c1e1fe02876b3204b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d8a87263fa2936f462e9fa3f53f65b773caef057591c1e1fe02876b3204b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d8a87263fa2936f462e9fa3f53f65b773caef057591c1e1fe02876b3204b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d8a87263fa2936f462e9fa3f53f65b773caef057591c1e1fe02876b3204b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:03 compute-0 podman[315165]: 2025-11-26 01:32:03.342499478 +0000 UTC m=+0.239554838 container init dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:32:03 compute-0 podman[315165]: 2025-11-26 01:32:03.364299171 +0000 UTC m=+0.261354481 container start dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 01:32:03 compute-0 podman[315165]: 2025-11-26 01:32:03.37137646 +0000 UTC m=+0.268431820 container attach dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 26 01:32:03 compute-0 podman[315207]: 2025-11-26 01:32:03.476869496 +0000 UTC m=+0.123426222 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 01:32:03 compute-0 podman[315238]: 2025-11-26 01:32:03.552194425 +0000 UTC m=+0.115702515 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:32:03 compute-0 podman[315244]: 2025-11-26 01:32:03.578313759 +0000 UTC m=+0.130195952 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:32:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]: {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    "0": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "devices": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "/dev/loop3"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            ],
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_name": "ceph_lv0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_size": "21470642176",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "name": "ceph_lv0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "tags": {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_name": "ceph",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.crush_device_class": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.encrypted": "0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_id": "0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.vdo": "0"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            },
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "vg_name": "ceph_vg0"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        }
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    ],
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    "1": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "devices": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "/dev/loop4"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            ],
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_name": "ceph_lv1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_size": "21470642176",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "name": "ceph_lv1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "tags": {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_name": "ceph",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.crush_device_class": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.encrypted": "0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_id": "1",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.vdo": "0"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            },
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "vg_name": "ceph_vg1"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        }
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    ],
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    "2": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "devices": [
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "/dev/loop5"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            ],
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_name": "ceph_lv2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_size": "21470642176",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "name": "ceph_lv2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "tags": {
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.cluster_name": "ceph",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.crush_device_class": "",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.encrypted": "0",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osd_id": "2",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:                "ceph.vdo": "0"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            },
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "type": "block",
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:            "vg_name": "ceph_vg2"
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:        }
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]:    ]
Nov 26 01:32:04 compute-0 ecstatic_rhodes[315204]: }
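[annotation] The JSON emitted by the ecstatic_rhodes container above maps OSD ids ("1", "2") to their backing logical volumes and ceph.* LV tags; the structure matches what `ceph-volume lvm list --format json` prints. A minimal sketch of re-running that listing and pulling out the per-OSD device and fsid (assuming ceph-volume is reachable, e.g. inside the ceph container as this deployment does):

    import json
    import subprocess

    # Re-run the listing (assumption: ceph-volume available in PATH) and
    # parse the OSD-id -> list-of-LVs map shown in the log above.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    osds = json.loads(out)
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} type={lv['type']}")

Against the output above this would print, e.g., "osd.1: /dev/ceph_vg1/ceph_lv1 fsid=a345f9b0-... type=block".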
Nov 26 01:32:04 compute-0 systemd[1]: libpod-dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d.scope: Deactivated successfully.
Nov 26 01:32:04 compute-0 podman[315165]: 2025-11-26 01:32:04.165298676 +0000 UTC m=+1.062354016 container died dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:32:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d8a87263fa2936f462e9fa3f53f65b773caef057591c1e1fe02876b3204b37-merged.mount: Deactivated successfully.
Nov 26 01:32:04 compute-0 podman[315165]: 2025-11-26 01:32:04.274166368 +0000 UTC m=+1.171221648 container remove dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:32:04 compute-0 systemd[1]: libpod-conmon-dda78fc0317c5f85c2d2d4e7c26d2f210a31bf93f76da606cb5990ad89963b0d.scope: Deactivated successfully.
Nov 26 01:32:04 compute-0 python3.9[315405]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:05 compute-0 python3.9[315594]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.476997752 +0000 UTC m=+0.104988123 container create 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.427737267 +0000 UTC m=+0.055727708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:32:05 compute-0 systemd[1]: Started libpod-conmon-8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f.scope.
Nov 26 01:32:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.659295119 +0000 UTC m=+0.287285550 container init 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.676279456 +0000 UTC m=+0.304269837 container start 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.682685706 +0000 UTC m=+0.310676137 container attach 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:32:05 compute-0 clever_buck[315683]: 167 167
Nov 26 01:32:05 compute-0 systemd[1]: libpod-8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f.scope: Deactivated successfully.
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.690435724 +0000 UTC m=+0.318426115 container died 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce69e66b2af45470de05d156f9cabca128d526206df992482c377d6a3c129873-merged.mount: Deactivated successfully.
Nov 26 01:32:05 compute-0 podman[315652]: 2025-11-26 01:32:05.781419803 +0000 UTC m=+0.409410174 container remove 8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:32:05 compute-0 systemd[1]: libpod-conmon-8806d0b3abc4e310430f63f45597ab38d2ffb9844365f2cc44b700c52c67a62f.scope: Deactivated successfully.
Nov 26 01:32:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:06 compute-0 podman[315774]: 2025-11-26 01:32:06.016602467 +0000 UTC m=+0.067212851 container create 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:32:06 compute-0 systemd[1]: Started libpod-conmon-4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d.scope.
Nov 26 01:32:06 compute-0 podman[315774]: 2025-11-26 01:32:05.99394459 +0000 UTC m=+0.044554964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:32:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c96958cbda7f23e4026555b7a2192e727983d524285aaf58d9156b7a485a7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c96958cbda7f23e4026555b7a2192e727983d524285aaf58d9156b7a485a7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c96958cbda7f23e4026555b7a2192e727983d524285aaf58d9156b7a485a7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93c96958cbda7f23e4026555b7a2192e727983d524285aaf58d9156b7a485a7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:32:06 compute-0 podman[315774]: 2025-11-26 01:32:06.174551618 +0000 UTC m=+0.225162052 container init 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:32:06 compute-0 podman[315774]: 2025-11-26 01:32:06.207069663 +0000 UTC m=+0.257680027 container start 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:32:06 compute-0 podman[315774]: 2025-11-26 01:32:06.212271209 +0000 UTC m=+0.262881683 container attach 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:32:06 compute-0 podman[315848]: 2025-11-26 01:32:06.468473954 +0000 UTC m=+0.116351903 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 01:32:06 compute-0 podman[315847]: 2025-11-26 01:32:06.480811891 +0000 UTC m=+0.132404074 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7)
Nov 26 01:32:06 compute-0 python3.9[315849]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
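[annotation] The Ansible task above validates the EDPM nftables ruleset by concatenating the rule files in jump order and feeding them to `nft -c -f -` (a parse-only dry run; nothing is applied). A minimal Python sketch of the same check, using the exact file list from the log line:

    import pathlib
    import subprocess

    # Same check the task performs: stitch the EDPM rule files together
    # in order and ask nft for a dry-run parse (-c) from stdin (-f -).
    files = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    ruleset = "".join(pathlib.Path(f).read_text() for f in files)
    subprocess.run(["nft", "-c", "-f", "-"],
                   input=ruleset, text=True, check=True)

Only after this check passes does the playbook load the chains for real (the `nft -f /etc/nftables/edpm-chains.nft` invocation a few entries below) and wire the includes into /etc/sysconfig/nftables.conf via blockinfile.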
Nov 26 01:32:07 compute-0 goofy_joliot[315813]: {
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_id": 0,
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "type": "bluestore"
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    },
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_id": 2,
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "type": "bluestore"
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    },
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_id": 1,
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:        "type": "bluestore"
Nov 26 01:32:07 compute-0 goofy_joliot[315813]:    }
Nov 26 01:32:07 compute-0 goofy_joliot[315813]: }
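[annotation] The goofy_joliot container's JSON above is keyed by OSD uuid and carries ceph_fsid, dm device path, osd_id, and type; it resembles the output of `ceph-volume raw list`, which cephadm uses for device inventory. A hedged sketch inverting that map into osd_id -> device (assuming the same command and output shape):

    import json
    import subprocess

    # Assumption: `ceph-volume raw list` prints the uuid-keyed JSON map
    # seen above. Invert it to an osd_id -> device table.
    out = subprocess.run(
        ["ceph-volume", "raw", "list"],
        capture_output=True, check=True, text=True,
    ).stdout
    by_id = {entry["osd_id"]: entry["device"]
             for entry in json.loads(out).values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id} -> {by_id[osd_id]}")

For the data above this yields osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1, osd.2 -> /dev/mapper/ceph_vg2-ceph_lv2; the mgr then persists the inventory via the config-key set commands logged just below.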
Nov 26 01:32:07 compute-0 systemd[1]: libpod-4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d.scope: Deactivated successfully.
Nov 26 01:32:07 compute-0 systemd[1]: libpod-4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d.scope: Consumed 1.085s CPU time.
Nov 26 01:32:07 compute-0 podman[316019]: 2025-11-26 01:32:07.407916423 +0000 UTC m=+0.088426268 container died 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c96958cbda7f23e4026555b7a2192e727983d524285aaf58d9156b7a485a7c-merged.mount: Deactivated successfully.
Nov 26 01:32:07 compute-0 podman[316019]: 2025-11-26 01:32:07.50204 +0000 UTC m=+0.182549765 container remove 4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_joliot, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:32:07 compute-0 systemd[1]: libpod-conmon-4d85b6680ccc2ce5f374bfa0de48daddeaa4f9e4842673e72316a2b4be1b638d.scope: Deactivated successfully.
Nov 26 01:32:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:32:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:32:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:32:07 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:32:07 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ac78c8d6-d20d-4ef4-b6d4-9b90329da2a5 does not exist
Nov 26 01:32:07 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 007c9def-20ad-42f5-8e1b-a40a591e410b does not exist
Nov 26 01:32:07 compute-0 python3.9[316083]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:32:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:32:08 compute-0 python3.9[316285]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:32:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:10 compute-0 python3.9[316438]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:11 compute-0 python3.9[316590]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:13 compute-0 python3.9[316742]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:13 compute-0 python3.9[316820]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:15 compute-0 python3.9[316972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:16 compute-0 python3.9[317050]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:17 compute-0 podman[317174]: 2025-11-26 01:32:17.185755058 +0000 UTC m=+0.114412958 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, config_id=edpm, release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 26 01:32:17 compute-0 podman[317175]: 2025-11-26 01:32:17.239065047 +0000 UTC m=+0.162428769 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 01:32:17 compute-0 python3.9[317239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:18 compute-0 python3.9[317318]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:18 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 26 01:32:18 compute-0 systemd[1]: session-54.scope: Consumed 3min 897ms CPU time.
Nov 26 01:32:18 compute-0 systemd-logind[800]: Session 54 logged out. Waiting for processes to exit.
Nov 26 01:32:18 compute-0 systemd-logind[800]: Removed session 54.
Nov 26 01:32:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:24 compute-0 systemd-logind[800]: New session 55 of user zuul.
Nov 26 01:32:24 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 26 01:32:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:32:24.938 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:32:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:32:24.940 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:32:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:32:24.940 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:32:26 compute-0 python3.9[317497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:32:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:26 compute-0 podman[317522]: 2025-11-26 01:32:26.564465368 +0000 UTC m=+0.123314709 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 26 01:32:27 compute-0 python3.9[317668]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:32:27 compute-0 network[317685]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:32:27 compute-0 network[317686]: 'network-scripts' will be removed from distribution in near future.
Nov 26 01:32:27 compute-0 network[317687]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:32:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:29 compute-0 podman[158021]: time="2025-11-26T01:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:32:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:32:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7269 "" "Go-http-client/1.1"
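[annotation] The two GETs above are a Go client (most likely the podman exporter, whose config elsewhere in this log points CONTAINER_HOST at unix:///run/podman/podman.sock) polling podman's libpod REST API. A minimal sketch of the same container listing in Python, assuming read access to that socket (usually root):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; the libpod API has no TCP endpoint."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    # Same endpoint the exporter hits in the log above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
    conn.close()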
Nov 26 01:32:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:31 compute-0 openstack_network_exporter[160178]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:32:31 compute-0 openstack_network_exporter[160178]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:32:31 compute-0 openstack_network_exporter[160178]: ERROR   01:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:32:31 compute-0 openstack_network_exporter[160178]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:32:31 compute-0 openstack_network_exporter[160178]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:32:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:34 compute-0 podman[317907]: 2025-11-26 01:32:34.556344881 +0000 UTC m=+0.095215489 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:32:34 compute-0 podman[317906]: 2025-11-26 01:32:34.583587497 +0000 UTC m=+0.129453922 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:32:34 compute-0 podman[317908]: 2025-11-26 01:32:34.625154626 +0000 UTC m=+0.158677233 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 26 01:32:35 compute-0 python3.9[318020]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:32:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:36 compute-0 python3.9[318104]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:32:37 compute-0 podman[318107]: 2025-11-26 01:32:37.602027949 +0000 UTC m=+0.127915638 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 01:32:37 compute-0 podman[318106]: 2025-11-26 01:32:37.610717103 +0000 UTC m=+0.153104806 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Nov 26 01:32:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:38 compute-0 python3.9[318299]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:32:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:40 compute-0 python3.9[318451]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:32:41
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', '.mgr']
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
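The balancer pass above evaluated all eleven pools in upmap mode with a 5% max-misplaced budget and prepared 0 of an allowed 10 changes, i.e. the PG distribution already needs no remapping. A minimal sketch to confirm the same state from the host, assuming the ceph CLI and an admin keyring are available:

    # Minimal sketch: inspect the balancer the mgr module logged above.
    # Output includes "mode" (upmap here), "active", and the last optimize result.
    import subprocess

    print(subprocess.check_output(["ceph", "balancer", "status"], text=True))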
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:32:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:32:41 compute-0 python3.9[318604]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:32:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:42 compute-0 python3.9[318756]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:32:43 compute-0 python3.9[318909]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:32:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:44 compute-0 python3.9[319032]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120763.0215924-95-260200381281221/.source.iscsi _original_basename=.tcsaxigf follow=False checksum=548503f8a3267cdae80182712af78ce401c7b78c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:32:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:46 compute-0 python3.9[319184]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
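The task sequence above generates a fresh initiator IQN with /usr/sbin/iscsi-iname, installs it as /etc/iscsi/initiatorname.iscsi (mode 0644), and touches a .initiator_reset marker. A minimal Python sketch of the same steps outside Ansible, with paths taken from the log and error handling omitted:

    # Minimal sketch of the initiator-name reset performed above, without Ansible.
    # iscsi-iname prints a random IQN, e.g. iqn.1994-05.com.redhat:2a8f...
    import pathlib
    import subprocess

    iqn = subprocess.check_output(["/usr/sbin/iscsi-iname"], text=True).strip()
    conf = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    conf.write_text(f"InitiatorName={iqn}\n")
    conf.chmod(0o644)
    pathlib.Path("/etc/iscsi/.initiator_reset").touch(mode=0o600)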
Nov 26 01:32:47 compute-0 podman[319337]: 2025-11-26 01:32:47.472862029 +0000 UTC m=+0.128200165 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:32:47 compute-0 podman[319336]: 2025-11-26 01:32:47.51448218 +0000 UTC m=+0.173461338 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Nov 26 01:32:47 compute-0 python3.9[319338]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
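The lineinfile invocation above pins the CHAP digest preference order (SHA3-256 first, MD5 last) in /etc/iscsi/iscsid.conf. A sketch approximating the module's replace-or-insert semantics, assuming the stock commented template line is present in the file:

    # Approximate sketch of the lineinfile semantics used above: replace an
    # existing node.session.auth.chap_algs line, else insert one after the
    # commented template line, else append at end of file.
    import re

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    PATH = "/etc/iscsi/iscsid.conf"
    lines = open(PATH).read().splitlines()

    replace_pat = re.compile(r"^node.session.auth.chap_algs")
    insert_after = re.compile(r"^#node.session.auth.chap.algs")

    for i, line in enumerate(lines):
        if replace_pat.search(line):
            lines[i] = LINE
            break
    else:
        idx = next((i for i, line in enumerate(lines) if insert_after.search(line)),
                   len(lines) - 1)
        lines.insert(idx + 1, LINE)

    with open(PATH, "w") as f:
        f.write("\n".join(lines) + "\n")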
Nov 26 01:32:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:49 compute-0 python3.9[319529]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:32:49 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 26 01:32:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:50 compute-0 python3.9[319686]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:32:50 compute-0 systemd[1]: Reloading.
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:32:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
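The pg_autoscaler targets above are reproducible: target = usage_ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two. Against the 64411926528-byte capacity shown, a multiplier of 300 matches every line, which is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster (an assumption about this deployment). A worked check:

    # Worked check of the pg_autoscaler targets logged above.
    # Assumption: 3 OSDs with the default mon_target_pg_per_osd = 100 (=> 300).
    pg_budget = 100 * 3

    for pool, usage, bias in [
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.log", 2.1620840658982875e-06, 1.0),
    ]:
        print(f"{pool}: pg target {usage * bias * pg_budget}")
    # .mgr: 0.0021557..., cephfs.cephfs.meta: 0.00061047...,
    # default.rgw.log: 0.00064862... -- matching the logged values exactly.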
Nov 26 01:32:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 26 01:32:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:32:51 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 01:32:51 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 01:32:51 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 26 01:32:51 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 01:32:51 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Nov 26 01:32:51 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
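The skip logged for iscsi.service just above is systemd's condition logic: ConditionPathExists=!/etc/iscsi/initiatorname.iscsi means "run only if the file does not exist", so the one-time name generation is bypassed because the play just wrote that file. A tiny sketch of the test as systemd evaluates it:

    # The "!" prefix in ConditionPathExists negates the test: the unit runs
    # only when /etc/iscsi/initiatorname.iscsi is absent.
    import os
    run_one_time_setup = not os.path.exists("/etc/iscsi/initiatorname.iscsi")
    print(run_one_time_setup)  # False here, hence "skipped ... unmet condition"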
Nov 26 01:32:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:52 compute-0 python3.9[319885]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:32:52 compute-0 network[319902]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:32:52 compute-0 network[319903]: 'network-scripts' will be removed from the distribution in the near future.
Nov 26 01:32:52 compute-0 network[319904]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:32:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:56 compute-0 podman[319991]: 2025-11-26 01:32:56.741183776 +0000 UTC m=+0.116134827 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:32:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:32:59 compute-0 python3.9[320190]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 01:32:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:32:59 compute-0 podman[158021]: time="2025-11-26T01:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:32:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:32:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7263 "" "Go-http-client/1.1"
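The two GETs above are the podman_exporter scraping the libpod REST API over /run/podman/podman.sock (the CONTAINER_HOST its config mounts). A stdlib-only sketch of the same containers/json call, with the socket path and API version taken from the log:

    # Minimal sketch: query the Podman libpod REST API over its unix socket,
    # mirroring the GET /libpod/containers/json call seen above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")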
Nov 26 01:33:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:00 compute-0 python3.9[320342]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 26 01:33:01 compute-0 openstack_network_exporter[160178]: ERROR   01:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:33:01 compute-0 openstack_network_exporter[160178]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:33:01 compute-0 openstack_network_exporter[160178]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:33:01 compute-0 openstack_network_exporter[160178]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:33:01 compute-0 openstack_network_exporter[160178]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
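The exporter errors above mean its appctl calls could not find the per-daemon control sockets (named <daemon>.<pid>.ctl) it expects under its /run/openvswitch and /run/ovn mounts; ovn-northd in particular never runs on a compute node, and with no datapath the pmd-* queries fail as well. A quick existence check, with directories taken from the volume mounts logged earlier:

    # Quick check: OVS/OVN daemons expose control sockets named
    # <daemon>.<pid>.ctl under their run directories; empty lists explain
    # "no control socket files found".
    import glob
    print(glob.glob("/run/openvswitch/*.ctl"))
    print(glob.glob("/run/ovn/*.ctl"))  # ovn-northd would appear here on a controller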
Nov 26 01:33:01 compute-0 python3.9[320498]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:02 compute-0 python3.9[320621]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120780.6539557-172-78919532899838/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:03 compute-0 python3.9[320773]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:04 compute-0 podman[320898]: 2025-11-26 01:33:04.780223074 +0000 UTC m=+0.118190344 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:33:04 compute-0 podman[320897]: 2025-11-26 01:33:04.786048018 +0000 UTC m=+0.123686789 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Nov 26 01:33:04 compute-0 podman[320900]: 2025-11-26 01:33:04.830051516 +0000 UTC m=+0.157945103 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 01:33:05 compute-0 python3.9[320972]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:33:05 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 01:33:05 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 01:33:05 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 01:33:05 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 01:33:05 compute-0 systemd[1]: Finished Load Kernel Modules.
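The multipath enablement above loads dm-multipath immediately with modprobe, records it in /etc/modules-load.d/dm-multipath.conf (the systemd mechanism) as well as the legacy /etc/modules file, and then bounces systemd-modules-load.service so the boot-time path is exercised now. A condensed sketch of those persistence steps, with paths from the log:

    # Condensed sketch of the steps above: load dm-multipath now, record it
    # for boot-time loading, then re-run systemd-modules-load.
    import pathlib
    import subprocess

    subprocess.run(["modprobe", "dm-multipath"], check=True)
    pathlib.Path("/etc/modules-load.d/dm-multipath.conf").write_text("dm-multipath\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)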
Nov 26 01:33:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:06 compute-0 python3.9[321146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:07 compute-0 python3.9[321298]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:33:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:08 compute-0 podman[321397]: 2025-11-26 01:33:08.104677032 +0000 UTC m=+0.101752682 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Nov 26 01:33:08 compute-0 podman[321400]: 2025-11-26 01:33:08.142473705 +0000 UTC m=+0.134531644 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:33:08 compute-0 python3.9[321589]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f1d6439e-ce0c-4763-85a8-8c25b7578a8d does not exist
Nov 26 01:33:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4645cd58-7764-417e-9424-68106ce13d5c does not exist
Nov 26 01:33:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 02701b77-e181-49a6-a014-c390662b81ff does not exist
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:33:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:33:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:33:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:09 compute-0 python3.9[321847]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:33:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
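The mon commands dispatched above are cephadm refreshing its client material: config generate-minimal-conf emits a [global] block containing just the fsid and mon_host that clients need, and auth get returns the named keyring. Equivalent direct calls, assuming admin credentials on the host:

    # The two mon commands dispatched above, issued directly via the ceph CLI.
    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True
    )
    admin_keyring = subprocess.check_output(
        ["ceph", "auth", "get", "client.admin"], text=True
    )
    print(minimal_conf)  # [global] block with fsid and mon_host only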
Nov 26 01:33:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.09313164 +0000 UTC m=+0.102524354 container create f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.056690995 +0000 UTC m=+0.066083739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:10 compute-0 systemd[1]: Started libpod-conmon-f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779.scope.
Nov 26 01:33:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.2492486 +0000 UTC m=+0.258641364 container init f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.261736951 +0000 UTC m=+0.271129635 container start f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.266795063 +0000 UTC m=+0.276187827 container attach f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:33:10 compute-0 mystifying_faraday[321933]: 167 167
Nov 26 01:33:10 compute-0 systemd[1]: libpod-f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779.scope: Deactivated successfully.
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.276037153 +0000 UTC m=+0.285429857 container died f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b949f39c7b37ec0849adc3d054d901a6709e4aba08904271dcc1890d32a49ba3-merged.mount: Deactivated successfully.
Nov 26 01:33:10 compute-0 podman[321913]: 2025-11-26 01:33:10.348154251 +0000 UTC m=+0.357546935 container remove f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:33:10 compute-0 systemd[1]: libpod-conmon-f4603e9c2cdf5cc96fe10ebec803c349f11448fa3fe4247dddeb6f78aa3c2779.scope: Deactivated successfully.
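The short-lived mystifying_faraday container above printed "167 167", the uid/gid of the ceph user baked into the image; cephadm launches throwaway containers like this to discover which IDs daemon directories must be chowned to. A sketch of such a probe, where stat-ing /var/lib/ceph inside the image is an assumption modeled on cephadm's uid/gid extraction:

    # Sketch: probe the ceph uid/gid inside the image before creating daemon
    # directories. The /var/lib/ceph stat target is an assumption.
    import subprocess

    image = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    out = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    print(out.strip())  # "167 167"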
Nov 26 01:33:10 compute-0 podman[322022]: 2025-11-26 01:33:10.646776649 +0000 UTC m=+0.108491312 container create 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:33:10 compute-0 podman[322022]: 2025-11-26 01:33:10.603735469 +0000 UTC m=+0.065450212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:10 compute-0 systemd[1]: Started libpod-conmon-1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e.scope.
Nov 26 01:33:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:10 compute-0 podman[322022]: 2025-11-26 01:33:10.831550775 +0000 UTC m=+0.293265438 container init 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:33:10 compute-0 podman[322022]: 2025-11-26 01:33:10.847185835 +0000 UTC m=+0.308900498 container start 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:33:10 compute-0 podman[322022]: 2025-11-26 01:33:10.852307609 +0000 UTC m=+0.314022272 container attach 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:33:11 compute-0 python3.9[322093]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120788.787194-230-261853460898297/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:12 compute-0 keen_galileo[322077]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:33:12 compute-0 keen_galileo[322077]: --> relative data size: 1.0
Nov 26 01:33:12 compute-0 keen_galileo[322077]: --> All data devices are unavailable
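The keen_galileo output above is ceph-volume's batch planner: it was handed 3 LVM devices, would have used all of each (relative data size 1.0), and then rejected them all as unavailable, which typically means the LVs already carry OSDs, so no new OSDs are created. A dry-run sketch of the same evaluation; the device paths here are purely illustrative:

    # Dry-run sketch: `ceph-volume lvm batch --report` prints the plan
    # without touching devices. Device paths below are illustrative only.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/lv0", "/dev/ceph_vg1/lv1", "/dev/ceph_vg2/lv2"],
        check=False,
    )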
Nov 26 01:33:12 compute-0 python3.9[322265]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
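The grep -q probe above checks whether the freshly installed /etc/multipath.conf already opens a blacklist { ... } section; its exit status tells the role whether one still needs to be added. The same check in Python, using the identical regex:

    # Same check as the shell task: does multipath.conf already open a
    # "blacklist {" section? Exit status 0/1 mirrors grep -q.
    import re
    import sys

    text = open("/etc/multipath.conf").read()
    sys.exit(0 if re.search(r"^blacklist\s*{", text, re.M) else 1)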
Nov 26 01:33:12 compute-0 systemd[1]: libpod-1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e.scope: Deactivated successfully.
Nov 26 01:33:12 compute-0 systemd[1]: libpod-1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e.scope: Consumed 1.285s CPU time.
Nov 26 01:33:12 compute-0 podman[322022]: 2025-11-26 01:33:12.206562263 +0000 UTC m=+1.668276946 container died 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4320167ee4941b9fa05fea020e4643641f501d4f75b5583e3e9dfa597eaba5ea-merged.mount: Deactivated successfully.
Nov 26 01:33:12 compute-0 podman[322022]: 2025-11-26 01:33:12.301472842 +0000 UTC m=+1.763187495 container remove 1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:33:12 compute-0 systemd[1]: libpod-conmon-1da2601937e4fc19f60bd01632033ac7af0f4b53505a31848be861b46f8e7a2e.scope: Deactivated successfully.
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.424927554 +0000 UTC m=+0.085636399 container create 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:33:13 compute-0 systemd[1]: Started libpod-conmon-598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43.scope.
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.393251813 +0000 UTC m=+0.053960678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.587264309 +0000 UTC m=+0.247973164 container init 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.603789534 +0000 UTC m=+0.264498369 container start 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.612234281 +0000 UTC m=+0.272943176 container attach 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:33:13 compute-0 nice_hoover[322555]: 167 167
Nov 26 01:33:13 compute-0 systemd[1]: libpod-598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43.scope: Deactivated successfully.
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.617258553 +0000 UTC m=+0.277967358 container died 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-db2a80c4b481864143a2a85a9187833aac879c1602e2d43f1369eeda8088c779-merged.mount: Deactivated successfully.
Nov 26 01:33:13 compute-0 podman[322513]: 2025-11-26 01:33:13.676081727 +0000 UTC m=+0.336790532 container remove 598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:33:13 compute-0 systemd[1]: libpod-conmon-598af793d6ff465e0db5a6976c028b47b98822547bfb357b6133e6087eb2de43.scope: Deactivated successfully.
Nov 26 01:33:13 compute-0 python3.9[322604]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:13 compute-0 podman[322610]: 2025-11-26 01:33:13.980188959 +0000 UTC m=+0.106638120 container create c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:33:14 compute-0 podman[322610]: 2025-11-26 01:33:13.936398927 +0000 UTC m=+0.062848078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:14 compute-0 systemd[1]: Started libpod-conmon-c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678.scope.
Nov 26 01:33:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fd30c8aae729ae9b634123635174d9dfb31542ea0201ba3663d2e91d6d6fad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fd30c8aae729ae9b634123635174d9dfb31542ea0201ba3663d2e91d6d6fad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fd30c8aae729ae9b634123635174d9dfb31542ea0201ba3663d2e91d6d6fad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fd30c8aae729ae9b634123635174d9dfb31542ea0201ba3663d2e91d6d6fad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:14 compute-0 podman[322610]: 2025-11-26 01:33:14.131297398 +0000 UTC m=+0.257746579 container init c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:33:14 compute-0 podman[322610]: 2025-11-26 01:33:14.155047546 +0000 UTC m=+0.281496677 container start c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:33:14 compute-0 podman[322610]: 2025-11-26 01:33:14.159592894 +0000 UTC m=+0.286042025 container attach c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:33:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]: {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    "0": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "devices": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "/dev/loop3"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            ],
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_name": "ceph_lv0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_size": "21470642176",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "name": "ceph_lv0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "tags": {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_name": "ceph",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.crush_device_class": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.encrypted": "0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_id": "0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.vdo": "0"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            },
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "vg_name": "ceph_vg0"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        }
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    ],
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    "1": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "devices": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "/dev/loop4"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            ],
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_name": "ceph_lv1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_size": "21470642176",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "name": "ceph_lv1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "tags": {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_name": "ceph",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.crush_device_class": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.encrypted": "0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_id": "1",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.vdo": "0"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            },
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "vg_name": "ceph_vg1"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        }
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    ],
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    "2": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "devices": [
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "/dev/loop5"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            ],
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_name": "ceph_lv2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_size": "21470642176",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "name": "ceph_lv2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "tags": {
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.cluster_name": "ceph",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.crush_device_class": "",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.encrypted": "0",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osd_id": "2",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:                "ceph.vdo": "0"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            },
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "type": "block",
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:            "vg_name": "ceph_vg2"
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:        }
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]:    ]
Nov 26 01:33:15 compute-0 gifted_sinoussi[322651]: }
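The JSON emitted by the gifted_sinoussi container above matches the output shape of "ceph-volume lvm list --format json": one entry per OSD id, each carrying the backing LV, its devices, and the ceph.* LV tags. A minimal sketch of reproducing such a listing on this host follows; the entrypoint and bind mounts are assumptions, since the journal records only the container lifecycle, not its command line:

    # Sketch only: flags and mounts assumed, not taken from the log above.
    podman run --rm --privileged \
        -v /dev:/dev \
        -v /var/lib/ceph:/var/lib/ceph \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume lvm list --format json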
Nov 26 01:33:15 compute-0 systemd[1]: libpod-c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678.scope: Deactivated successfully.
Nov 26 01:33:15 compute-0 podman[322610]: 2025-11-26 01:33:15.069147552 +0000 UTC m=+1.195596713 container died c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4fd30c8aae729ae9b634123635174d9dfb31542ea0201ba3663d2e91d6d6fad-merged.mount: Deactivated successfully.
Nov 26 01:33:15 compute-0 podman[322610]: 2025-11-26 01:33:15.169412402 +0000 UTC m=+1.295861533 container remove c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:33:15 compute-0 systemd[1]: libpod-conmon-c4314f0aa2509f32884ed806b03c5ead5fe424656ce3c7a80f0643ec71b66678.scope: Deactivated successfully.
Nov 26 01:33:15 compute-0 python3.9[322787]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.265812433 +0000 UTC m=+0.077611993 container create 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.236625832 +0000 UTC m=+0.048425392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:16 compute-0 systemd[1]: Started libpod-conmon-297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86.scope.
Nov 26 01:33:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.395723876 +0000 UTC m=+0.207523466 container init 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.407872288 +0000 UTC m=+0.219671808 container start 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.412081026 +0000 UTC m=+0.223880626 container attach 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:33:16 compute-0 bold_curran[323103]: 167 167
Nov 26 01:33:16 compute-0 systemd[1]: libpod-297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86.scope: Deactivated successfully.
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.422284553 +0000 UTC m=+0.234084113 container died 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-351169144c1f5a8a0cda8c24158ecf9f618d7ebd41c02501abc21d32eea40f74-merged.mount: Deactivated successfully.
Nov 26 01:33:16 compute-0 python3.9[323100]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:16 compute-0 podman[323078]: 2025-11-26 01:33:16.492169639 +0000 UTC m=+0.303969199 container remove 297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:33:16 compute-0 systemd[1]: libpod-conmon-297b27a479b8f9fa18830251bec21fc1cd48b206dbde73fd9e488b53a11edb86.scope: Deactivated successfully.
Nov 26 01:33:16 compute-0 podman[323151]: 2025-11-26 01:33:16.759389723 +0000 UTC m=+0.100794615 container create 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 01:33:16 compute-0 podman[323151]: 2025-11-26 01:33:16.717368912 +0000 UTC m=+0.058773854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:33:16 compute-0 systemd[1]: Started libpod-conmon-0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b.scope.
Nov 26 01:33:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612cc02e4dd4aeaf0ffcf4ab9f32c766e283594a965acfe79650026d71495712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612cc02e4dd4aeaf0ffcf4ab9f32c766e283594a965acfe79650026d71495712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612cc02e4dd4aeaf0ffcf4ab9f32c766e283594a965acfe79650026d71495712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612cc02e4dd4aeaf0ffcf4ab9f32c766e283594a965acfe79650026d71495712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:33:16 compute-0 podman[323151]: 2025-11-26 01:33:16.962953748 +0000 UTC m=+0.304358700 container init 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:33:16 compute-0 podman[323151]: 2025-11-26 01:33:16.983326281 +0000 UTC m=+0.324731173 container start 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:33:16 compute-0 podman[323151]: 2025-11-26 01:33:16.990430401 +0000 UTC m=+0.331835343 container attach 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:33:17 compute-0 python3.9[323299]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]: {
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_id": 0,
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "type": "bluestore"
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    },
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_id": 2,
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "type": "bluestore"
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    },
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_id": 1,
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:        "type": "bluestore"
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]:    }
Nov 26 01:33:18 compute-0 hopeful_murdock[323190]: }
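This second listing, keyed by OSD fsid with ceph_fsid/device/osd_id/type fields, is consistent with "ceph-volume raw list --format json"; cephadm gathers such listings alongside the LVM one when refreshing a host's OSD state, which fits the mgr/cephadm/host.compute-0 config-key writes that follow. A hedged equivalent invocation (the actual command line is not captured in the journal):

    # Sketch only: the real container entrypoint is not shown in the log.
    podman run --rm --privileged -v /dev:/dev \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume raw list --format json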
Nov 26 01:33:18 compute-0 systemd[1]: libpod-0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b.scope: Deactivated successfully.
Nov 26 01:33:18 compute-0 podman[323151]: 2025-11-26 01:33:18.191086365 +0000 UTC m=+1.532491257 container died 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:33:18 compute-0 systemd[1]: libpod-0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b.scope: Consumed 1.204s CPU time.
Nov 26 01:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-612cc02e4dd4aeaf0ffcf4ab9f32c766e283594a965acfe79650026d71495712-merged.mount: Deactivated successfully.
Nov 26 01:33:18 compute-0 podman[323151]: 2025-11-26 01:33:18.284293106 +0000 UTC m=+1.625697968 container remove 0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_murdock, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:33:18 compute-0 systemd[1]: libpod-conmon-0388ce0933de60c5d5b9f914eb57c363372a54e1dfb08bf20882a7403f7ce74b.scope: Deactivated successfully.
Nov 26 01:33:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:33:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:33:18 compute-0 podman[323432]: 2025-11-26 01:33:18.352979838 +0000 UTC m=+0.118360240 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 26 01:33:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:18 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6b5887d3-c7ec-42b2-9e6f-69a753ec34ea does not exist
Nov 26 01:33:18 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev cd5a047c-1a49-4cfb-bf3e-af69b62e8a52 does not exist
Nov 26 01:33:18 compute-0 podman[323437]: 2025-11-26 01:33:18.369635936 +0000 UTC m=+0.132070775 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:33:18 compute-0 python3.9[323551]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:33:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.371108) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799371136, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1178, "num_deletes": 507, "total_data_size": 1298049, "memory_usage": 1330960, "flush_reason": "Manual Compaction"}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799383707, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1274849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13554, "largest_seqno": 14731, "table_properties": {"data_size": 1269662, "index_size": 2199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13676, "raw_average_key_size": 17, "raw_value_size": 1257238, "raw_average_value_size": 1641, "num_data_blocks": 101, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120715, "oldest_key_time": 1764120715, "file_creation_time": 1764120799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12667 microseconds, and 6996 cpu microseconds.
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.383771) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1274849 bytes OK
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.383791) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.386237) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.386251) EVENT_LOG_v1 {"time_micros": 1764120799386246, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.386266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1291552, prev total WAL file size 1291552, number of live WAL files 2.
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.387036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1244KB)], [32(7353KB)]
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799387079, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8805267, "oldest_snapshot_seqno": -1}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3742 keys, 6914440 bytes, temperature: kUnknown
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799432027, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6914440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6887854, "index_size": 16109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91857, "raw_average_key_size": 24, "raw_value_size": 6818482, "raw_average_value_size": 1822, "num_data_blocks": 684, "num_entries": 3742, "num_filter_entries": 3742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.432272) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6914440 bytes
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.435263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.6 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.2 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.3) write-amplify(5.4) OK, records in: 4769, records dropped: 1027 output_compression: NoCompression
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.435290) EVENT_LOG_v1 {"time_micros": 1764120799435278, "job": 14, "event": "compaction_finished", "compaction_time_micros": 45014, "compaction_time_cpu_micros": 17114, "output_level": 6, "num_output_files": 1, "total_output_size": 6914440, "num_input_records": 4769, "num_output_records": 3742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799435808, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120799438661, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.386910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.439046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.439051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.439054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.439057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:33:19.439060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:33:19 compute-0 python3.9[323731]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:20 compute-0 python3.9[323883]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
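[note] The two lineinfile invocations above pin "skip_kpartx yes" and "user_friendly_names no" inside the defaults section of /etc/multipath.conf: rewrite the first line matching the regexp if one exists, otherwise insert after the first "^defaults" match. A rough Python sketch of that semantics (illustrative only; the real module also handles ownership, SELinux contexts, backups, and atomic writes):

    import re

    def lineinfile(path, line, regexp, insertafter):
        # state=present with regexp: rewrite the first matching line,
        # else insert after the first insertafter match (firstmatch=True)
        lines = open(path).read().splitlines()
        pat, anchor = re.compile(regexp), re.compile(insertafter)
        for i, text in enumerate(lines):
            if pat.search(text):
                lines[i] = line
                break
        else:
            for i, text in enumerate(lines):
                if anchor.search(text):
                    lines.insert(i + 1, line)
                    break
        open(path, "w").write("\n".join(lines) + "\n")

    lineinfile("/etc/multipath.conf", "        skip_kpartx yes", r"^\s+skip_kpartx", r"^defaults")
    lineinfile("/etc/multipath.conf", "        user_friendly_names no", r"^\s+user_friendly_names", r"^defaults")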
Nov 26 01:33:21 compute-0 python3.9[324035]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:33:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:23 compute-0 python3.9[324189]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:24 compute-0 python3.9[324341]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:33:24.940 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:33:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:33:24.941 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:33:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:33:24.941 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
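[note] The acquire/acquired/released triplet above is oslo.concurrency's standard trace around a synchronized section; ProcessMonitor wraps its child-process liveness check in an in-process lock of that name. The pattern, sketched (the decorated body is a placeholder, not neutron's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # runs under the named lock; entry and exit emit the
        # "acquired ... waited" / "released ... held" DEBUG lines seen above
        pass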
Nov 26 01:33:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:26 compute-0 python3.9[324493]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:27 compute-0 podman[324571]: 2025-11-26 01:33:27.037103976 +0000 UTC m=+0.146492710 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 01:33:27 compute-0 python3.9[324572]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:28 compute-0 python3.9[324741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:28 compute-0 python3.9[324819]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:30 compute-0 podman[158021]: time="2025-11-26T01:33:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:33:30 compute-0 podman[158021]: @ - - [26/Nov/2025:01:33:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:33:30 compute-0 podman[158021]: @ - - [26/Nov/2025:01:33:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7271 "" "Go-http-client/1.1"
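[note] The two GET requests above are the podman system service (pid 158021) answering libpod REST calls over its unix socket; the listing can be reproduced directly. A sketch assuming the default socket path /run/podman/podman.sock, which the log does not show:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough http.client-over-AF_UNIX to query the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")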
Nov 26 01:33:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:30 compute-0 python3.9[324971]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
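[note] Note mode=420 in the invocation above, where neighbouring tasks in this run log mode=0644: an unquoted 0644 in task YAML is parsed as an octal integer and reaches the module as decimal 420. The resulting permissions are identical either way:

    # decimal 420 == octal 0644 == rw-r--r--
    assert oct(420) == "0o644"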
Nov 26 01:33:31 compute-0 openstack_network_exporter[160178]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:33:31 compute-0 openstack_network_exporter[160178]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:33:31 compute-0 openstack_network_exporter[160178]: ERROR   01:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:33:31 compute-0 openstack_network_exporter[160178]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:33:31 compute-0 openstack_network_exporter[160178]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:33:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:32 compute-0 python3.9[325123]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:32 compute-0 python3.9[325201]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:33 compute-0 python3.9[325353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:34 compute-0 python3.9[325431]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:35 compute-0 podman[325456]: 2025-11-26 01:33:35.59478169 +0000 UTC m=+0.133640899 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 01:33:35 compute-0 podman[325457]: 2025-11-26 01:33:35.606116888 +0000 UTC m=+0.136517850 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:33:35 compute-0 podman[325458]: 2025-11-26 01:33:35.659532921 +0000 UTC m=+0.185329213 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 26 01:33:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:36 compute-0 python3.9[325648]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:33:36 compute-0 systemd[1]: Reloading.
Nov 26 01:33:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:33:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
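[note] The ansible.builtin.systemd task at 01:33:36 (daemon_reload=True, enabled=True, state=started) accounts for the "Reloading." line and the generator chatter that follows it. It is roughly equivalent to this sequence, sketched with subprocess (unit name taken from the log):

    import subprocess

    for cmd in (
        ["systemctl", "daemon-reload"],                      # "Reloading." + generator output
        ["systemctl", "enable", "edpm-container-shutdown"],  # enabled=True
        ["systemctl", "start", "edpm-container-shutdown"],   # state=started
    ):
        subprocess.run(cmd, check=True)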
Nov 26 01:33:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:38 compute-0 podman[325742]: 2025-11-26 01:33:38.579779141 +0000 UTC m=+0.125505630 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:33:38 compute-0 podman[325738]: 2025-11-26 01:33:38.601386819 +0000 UTC m=+0.145482122 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Nov 26 01:33:39 compute-0 python3.9[325879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:39 compute-0 python3.9[325957]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:41 compute-0 python3.9[326109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:33:41
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.control', 'images', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.log', '.mgr']
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:33:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:33:41 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 01:33:41 compute-0 python3.9[326188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:43 compute-0 python3.9[326340]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:33:43 compute-0 systemd[1]: Reloading.
Nov 26 01:33:43 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:33:43 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:33:43 compute-0 systemd[1]: Starting Create netns directory...
Nov 26 01:33:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 01:33:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 01:33:43 compute-0 systemd[1]: Finished Create netns directory.
Nov 26 01:33:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:44 compute-0 python3.9[326535]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:46 compute-0 python3.9[326687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:46 compute-0 python3.9[326810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120825.2633512-437-15978547475239/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
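[note] The stat/copy pair above is ansible's usual idempotency check: checksum the destination first, then copy only when it differs from the source checksum (af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f here). The comparison boils down to:

    import hashlib

    def sha1sum(path):
        # stream the file rather than reading it whole
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha1sum("/var/lib/openstack/healthchecks/multipathd/healthcheck") != \
            "af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f":
        print("checksums differ: the copy module would rewrite the file")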
Nov 26 01:33:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:48 compute-0 python3.9[326962]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:33:48 compute-0 podman[326964]: 2025-11-26 01:33:48.604933951 +0000 UTC m=+0.141036277 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 01:33:48 compute-0 podman[326963]: 2025-11-26 01:33:48.626621731 +0000 UTC m=+0.163901730 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vcs-type=git, container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Nov 26 01:33:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:49 compute-0 python3.9[327152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:33:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
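[note] Every pg_autoscaler line above follows the same arithmetic: raw PG target = (pool's share of raw capacity) x bias x an overall PG budget, then quantized to a power of two, apparently subject to a per-pool minimum (cephfs.cephfs.meta lands on 16), and pools keep their current pg_num unless the computed target differs by a large enough factor. The budget is not logged, but the printed numbers pin it at 300, plausibly 3 OSDs x 100 target PGs per OSD. A check of that inference:

    # raw target = usage_ratio * bias * budget; budget ~= 300 reproduces every line above
    def raw_pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    assert abs(raw_pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12  # .mgr
    assert abs(raw_pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12  # cephfs.cephfs.meta
    assert abs(raw_pg_target(2.5436283128215145e-07, 1.0) - 7.630884938464544e-05) < 1e-12 # .rgw.root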
Nov 26 01:33:51 compute-0 python3.9[327275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120829.109948-462-251926777788027/.source.json _original_basename=.xthd_vyl follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:52 compute-0 python3.9[327427]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:33:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:33:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3315 writes, 14K keys, 3315 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 3315 writes, 3315 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1278 writes, 5795 keys, 1278 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s
Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    111.7      0.14              0.07         7    0.019       0      0       0.0       0.0
  L6      1/0    6.59 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    181.0    149.4      0.27              0.15         6    0.045     24K   3203       0.0       0.0
 Sum      1/0    6.59 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    120.5    136.8      0.40              0.22        13    0.031     24K   3203       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    140.7    141.1      0.24              0.12         8    0.030     17K   2468       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    181.0    149.4      0.27              0.15         6    0.045     24K   3203       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.7      0.13              0.07         6    0.021       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.015, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 1.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(99,1.38 MB,0.448063%) FilterBlock(14,74.48 KB,0.0236164%) IndexBlock(14,148.55 KB,0.0470991%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
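[note] The stats block above was emitted by RocksDB as one multi-line message; it reached the collector with its embedded newlines escaped as #012 (and ESC as #033, visible earlier on the colorized ovn_metadata_agent lines before cleanup), the usual syslog-style control-character escaping. When post-processing such captures, the escaping can be undone:

    import re

    def unescape_journal(msg: str) -> str:
        # undo #NNN octal escapes, e.g. #012 -> newline, #033 -> ESC
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), msg)

    assert unescape_journal("line1#012line2") == "line1\nline2"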
Nov 26 01:33:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:56 compute-0 python3.9[327854]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 26 01:33:57 compute-0 podman[327978]: 2025-11-26 01:33:57.267524635 +0000 UTC m=+0.137164249 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 01:33:57 compute-0 python3.9[328024]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:33:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:33:58 compute-0 python3.9[328177]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 01:33:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:33:59 compute-0 podman[158021]: time="2025-11-26T01:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:33:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Nov 26 01:33:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7267 "" "Go-http-client/1.1"
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.781 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.782 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff2033440>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.815 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:33:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:33:59.818 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
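The ceilometer debug lines above trace one complete polling cycle on an idle compute node: each pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery runs once and returns an empty list, so every pollster is skipped before being marked finished. The following minimal Python sketch mirrors that register/discover/skip flow; the function and variable names are illustrative only, not ceilometer's actual API.

    # Minimal sketch of the polling flow visible in the log above.
    # Names (run_pollster, run_cycle, ...) are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # An idle compute node has no instances, hence every
        # "Skip pollster ..., no resources found this cycle" line above.
        return []

    def run_pollster(name, resources):
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [f"{name} sample for {r}" for r in resources]

    def run_cycle(pollster_names):
        # One discovery result is shared by all pollsters in the cycle,
        # matching the single 'local_instances' entry in the discovery cache.
        discovery_cache = {'local_instances': discover_local_instances()}
        history = {}  # mirrors the "pollster history" dict in the log
        with ThreadPoolExecutor() as executor:
            futures = {name: executor.submit(run_pollster, name,
                                             discovery_cache['local_instances'])
                       for name in pollster_names}
            for name, future in futures.items():
                history[name] = future.result()
                print(f"Finished processing pollster [{name}].")
        return history

    if __name__ == '__main__':
        run_cycle(['disk.device.usage', 'power.state', 'cpu', 'memory.usage'])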
Nov 26 01:34:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:01 compute-0 openstack_network_exporter[160178]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:34:01 compute-0 openstack_network_exporter[160178]: ERROR   01:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:34:01 compute-0 openstack_network_exporter[160178]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:34:01 compute-0 openstack_network_exporter[160178]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:34:01 compute-0 openstack_network_exporter[160178]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
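The openstack_network_exporter errors above mean the exporter could not find the Unix control sockets it uses to query ovn-northd, ovsdb-server, and the userspace (netdev) datapath, which is expected on a node that runs neither ovn-northd nor a netdev datapath. A quick way to see which control sockets actually exist is to look for the *.ctl files; the sketch below assumes the usual /var/run/openvswitch and /var/run/ovn socket layout, which may differ per deployment.

    # Sketch: check for the control sockets the exporter failed to find.
    # Socket path patterns are the conventional defaults, an assumption here.
    import glob

    SOCKETS = {
        'ovn-northd':   '/var/run/ovn/ovn-northd.*.ctl',
        'ovsdb-server': '/var/run/openvswitch/ovsdb-server.*.ctl',
        'ovs-vswitchd': '/var/run/openvswitch/ovs-vswitchd.*.ctl',
    }

    for daemon, pattern in SOCKETS.items():
        matches = glob.glob(pattern)
        print(f"{daemon}: {matches[0] if matches else 'no control socket files found'}")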
Nov 26 01:34:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:02 compute-0 python3[328356]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:34:03 compute-0 podman[328368]: 2025-11-26 01:34:03.860936411 +0000 UTC m=+1.634580267 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 26 01:34:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:04 compute-0 podman[328423]: 2025-11-26 01:34:04.165809065 +0000 UTC m=+0.110514669 container create ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:34:04 compute-0 podman[328423]: 2025-11-26 01:34:04.114106031 +0000 UTC m=+0.058811645 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 26 01:34:04 compute-0 python3[328356]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
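The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage expands the config_data dictionary from the container-startup-config JSON into a podman create command line. The sketch below reproduces a subset of that mapping (environment, healthcheck, network, privileged mode, volumes) for illustration only; it is not the module's real implementation, and the config_data shown is abbreviated.

    # Illustrative mapping of a config_data dict onto "podman create" flags,
    # shaped after the multipathd invocation logged above.
    import shlex

    def podman_create_argv(name, cfg):
        argv = ['podman', 'create', '--name', name,
                '--conmon-pidfile', f'/run/{name}.pid']
        for key, val in cfg.get('environment', {}).items():
            argv += ['--env', f'{key}={val}']
        if 'healthcheck' in cfg:
            argv += ['--healthcheck-command', cfg['healthcheck']['test']]
        if cfg.get('net'):
            argv += ['--network', cfg['net']]
        if cfg.get('privileged'):
            argv += ['--privileged=True']
        for vol in cfg.get('volumes', []):
            argv += ['--volume', vol]
        argv.append(cfg['image'])
        return argv

    cfg = {
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
        'healthcheck': {'test': '/openstack/healthcheck'},
        'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified',
        'net': 'host',
        'privileged': True,
        'volumes': ['/etc/multipath:/etc/multipath:z'],  # abbreviated list
    }
    print(shlex.join(podman_create_argv('multipathd', cfg)))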
Nov 26 01:34:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:05 compute-0 python3.9[328610]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:34:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:06 compute-0 podman[328737]: 2025-11-26 01:34:06.57841249 +0000 UTC m=+0.129059981 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:34:06 compute-0 podman[328736]: 2025-11-26 01:34:06.591265691 +0000 UTC m=+0.147331804 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 26 01:34:06 compute-0 podman[328738]: 2025-11-26 01:34:06.619480895 +0000 UTC m=+0.163692505 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:34:06 compute-0 python3.9[328814]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:07 compute-0 python3.9[328906]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:34:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:08 compute-0 python3.9[329057]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764120847.5513747-550-277588255765326/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:08 compute-0 podman[329106]: 2025-11-26 01:34:08.930730181 +0000 UTC m=+0.130660956 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:34:08 compute-0 podman[329105]: 2025-11-26 01:34:08.949079297 +0000 UTC m=+0.152597583 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
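The health_status=healthy events above are emitted by podman's per-container healthcheck runs. The same state can be read back on demand with podman inspect; the sketch below assumes the podman CLI is on PATH and that each container defines a healthcheck, and it only illustrates reading the status, not how the journald events themselves are produced.

    # Sketch: read the health state behind the health_status events above.
    import json
    import subprocess

    def container_health(name):
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{json .State.Health}}', name],
            capture_output=True, text=True, check=True).stdout
        health = json.loads(out) or {}  # empty if no healthcheck is defined
        return health.get('Status'), health.get('FailingStreak')

    for name in ('ceilometer_agent_compute', 'ovn_controller', 'node_exporter'):
        print(name, container_health(name))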
Nov 26 01:34:09 compute-0 python3.9[329175]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:34:09 compute-0 systemd[1]: Reloading.
Nov 26 01:34:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:09 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:09 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:10 compute-0 python3.9[329288]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:10 compute-0 systemd[1]: Reloading.
Nov 26 01:34:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:11 compute-0 systemd[1]: Starting multipathd container...
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007c9be721d325047295ab5fff1b29c4157cf7cdeee3f9912595eac90e0634a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007c9be721d325047295ab5fff1b29c4157cf7cdeee3f9912595eac90e0634a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.
Nov 26 01:34:11 compute-0 podman[329328]: 2025-11-26 01:34:11.29511671 +0000 UTC m=+0.227384196 container init ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 26 01:34:11 compute-0 multipathd[329343]: + sudo -E kolla_set_configs
Nov 26 01:34:11 compute-0 podman[329328]: 2025-11-26 01:34:11.337762589 +0000 UTC m=+0.270030085 container start ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 01:34:11 compute-0 podman[329328]: multipathd
Nov 26 01:34:11 compute-0 systemd[1]: Started multipathd container.
Nov 26 01:34:11 compute-0 multipathd[329343]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:34:11 compute-0 multipathd[329343]: INFO:__main__:Validating config file
Nov 26 01:34:11 compute-0 multipathd[329343]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:34:11 compute-0 multipathd[329343]: INFO:__main__:Writing out command to execute
Nov 26 01:34:11 compute-0 multipathd[329343]: ++ cat /run_command
Nov 26 01:34:11 compute-0 multipathd[329343]: + CMD='/usr/sbin/multipathd -d'
Nov 26 01:34:11 compute-0 multipathd[329343]: + ARGS=
Nov 26 01:34:11 compute-0 multipathd[329343]: + sudo kolla_copy_cacerts
Nov 26 01:34:11 compute-0 multipathd[329343]: + [[ ! -n '' ]]
Nov 26 01:34:11 compute-0 multipathd[329343]: + . kolla_extend_start
Nov 26 01:34:11 compute-0 multipathd[329343]: Running command: '/usr/sbin/multipathd -d'
Nov 26 01:34:11 compute-0 multipathd[329343]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 01:34:11 compute-0 multipathd[329343]: + umask 0022
Nov 26 01:34:11 compute-0 multipathd[329343]: + exec /usr/sbin/multipathd -d
Nov 26 01:34:11 compute-0 podman[329350]: 2025-11-26 01:34:11.490398211 +0000 UTC m=+0.134746090 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:34:11 compute-0 systemd[1]: ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-24c7b851ac9963e3.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 01:34:11 compute-0 systemd[1]: ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-24c7b851ac9963e3.service: Failed with result 'exit-code'.
Nov 26 01:34:11 compute-0 multipathd[329343]: 4482.198094 | --------start up--------
Nov 26 01:34:11 compute-0 multipathd[329343]: 4482.198125 | read /etc/multipath.conf
Nov 26 01:34:11 compute-0 multipathd[329343]: 4482.213164 | path checkers start up
Nov 26 01:34:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:12 compute-0 python3.9[329531]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:34:13 compute-0 python3.9[329685]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:34:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:15 compute-0 python3.9[329850]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:34:15 compute-0 systemd[1]: Stopping multipathd container...
Nov 26 01:34:15 compute-0 multipathd[329343]: 4486.375639 | exit (signal)
Nov 26 01:34:15 compute-0 multipathd[329343]: 4486.375941 | --------shut down-------
Nov 26 01:34:15 compute-0 systemd[1]: libpod-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.scope: Deactivated successfully.
Nov 26 01:34:15 compute-0 podman[329854]: 2025-11-26 01:34:15.731518267 +0000 UTC m=+0.131231442 container died ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:34:15 compute-0 systemd[1]: ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-24c7b851ac9963e3.timer: Deactivated successfully.
Nov 26 01:34:15 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.
Nov 26 01:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-userdata-shm.mount: Deactivated successfully.
Nov 26 01:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-007c9be721d325047295ab5fff1b29c4157cf7cdeee3f9912595eac90e0634a9-merged.mount: Deactivated successfully.
Nov 26 01:34:15 compute-0 podman[329854]: 2025-11-26 01:34:15.868028905 +0000 UTC m=+0.267742050 container cleanup ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:34:15 compute-0 podman[329854]: multipathd
Nov 26 01:34:15 compute-0 podman[329880]: multipathd
Nov 26 01:34:16 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 26 01:34:16 compute-0 systemd[1]: Stopped multipathd container.
Nov 26 01:34:16 compute-0 systemd[1]: Starting multipathd container...
Nov 26 01:34:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007c9be721d325047295ab5fff1b29c4157cf7cdeee3f9912595eac90e0634a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007c9be721d325047295ab5fff1b29c4157cf7cdeee3f9912595eac90e0634a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.
Nov 26 01:34:16 compute-0 podman[329892]: 2025-11-26 01:34:16.289956441 +0000 UTC m=+0.244558168 container init ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 01:34:16 compute-0 multipathd[329908]: + sudo -E kolla_set_configs
Nov 26 01:34:16 compute-0 podman[329892]: 2025-11-26 01:34:16.334938016 +0000 UTC m=+0.289539693 container start ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 01:34:16 compute-0 podman[329892]: multipathd
Nov 26 01:34:16 compute-0 systemd[1]: Started multipathd container.
Nov 26 01:34:16 compute-0 multipathd[329908]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:34:16 compute-0 multipathd[329908]: INFO:__main__:Validating config file
Nov 26 01:34:16 compute-0 multipathd[329908]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:34:16 compute-0 multipathd[329908]: INFO:__main__:Writing out command to execute
Nov 26 01:34:16 compute-0 multipathd[329908]: ++ cat /run_command
Nov 26 01:34:16 compute-0 multipathd[329908]: + CMD='/usr/sbin/multipathd -d'
Nov 26 01:34:16 compute-0 multipathd[329908]: + ARGS=
Nov 26 01:34:16 compute-0 multipathd[329908]: + sudo kolla_copy_cacerts
Nov 26 01:34:16 compute-0 multipathd[329908]: + [[ ! -n '' ]]
Nov 26 01:34:16 compute-0 multipathd[329908]: + . kolla_extend_start
Nov 26 01:34:16 compute-0 multipathd[329908]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 01:34:16 compute-0 multipathd[329908]: Running command: '/usr/sbin/multipathd -d'
Nov 26 01:34:16 compute-0 multipathd[329908]: + umask 0022
Nov 26 01:34:16 compute-0 multipathd[329908]: + exec /usr/sbin/multipathd -d
Nov 26 01:34:16 compute-0 podman[329915]: 2025-11-26 01:34:16.491431387 +0000 UTC m=+0.134171054 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 26 01:34:16 compute-0 multipathd[329908]: 4487.183669 | --------start up--------
Nov 26 01:34:16 compute-0 multipathd[329908]: 4487.183722 | read /etc/multipath.conf
Nov 26 01:34:16 compute-0 multipathd[329908]: 4487.199507 | path checkers start up
Nov 26 01:34:16 compute-0 systemd[1]: ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-7a64c4f4c9254e07.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 01:34:16 compute-0 systemd[1]: ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2-7a64c4f4c9254e07.service: Failed with result 'exit-code'.
Nov 26 01:34:17 compute-0 python3.9[330099]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:18 compute-0 podman[330244]: 2025-11-26 01:34:18.886459058 +0000 UTC m=+0.121306913 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:34:18 compute-0 podman[330238]: 2025-11-26 01:34:18.902043786 +0000 UTC m=+0.138912567 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 26 01:34:19 compute-0 python3.9[330351]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5636a549-05f0-4bbc-8548-d9f006fdc2bb does not exist
Nov 26 01:34:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2f058a53-00b5-45c0-a0b6-f08f2834f6f7 does not exist
Nov 26 01:34:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c326f9b6-89f2-4127-9532-3e039ee1c481 does not exist
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:34:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:34:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:34:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:20 compute-0 python3.9[330619]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 26 01:34:20 compute-0 kernel: Key type psk registered
Nov 26 01:34:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:34:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.002224565 +0000 UTC m=+0.064734381 container create 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:34:21 compute-0 systemd[1]: Started libpod-conmon-4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b.scope.
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:20.980207786 +0000 UTC m=+0.042717632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.156637128 +0000 UTC m=+0.219147024 container init 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.175375425 +0000 UTC m=+0.237885261 container start 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.18268321 +0000 UTC m=+0.245193056 container attach 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:34:21 compute-0 hardcore_robinson[330844]: 167 167
Nov 26 01:34:21 compute-0 systemd[1]: libpod-4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b.scope: Deactivated successfully.
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.187224558 +0000 UTC m=+0.249734404 container died 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 26 01:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d4c683d0c3cdacfe5f385e28232adbc3964cf791745bb85d93037e2e415966d-merged.mount: Deactivated successfully.
Nov 26 01:34:21 compute-0 podman[330797]: 2025-11-26 01:34:21.275662665 +0000 UTC m=+0.338172511 container remove 4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:34:21 compute-0 systemd[1]: libpod-conmon-4cda872622a3c1bf5578286be4e68ec18a3f49ec51633cfbdc32eb8fcdaa738b.scope: Deactivated successfully.
Nov 26 01:34:21 compute-0 podman[330912]: 2025-11-26 01:34:21.510513459 +0000 UTC m=+0.091924176 container create 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:34:21 compute-0 python3.9[330906]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:34:21 compute-0 podman[330912]: 2025-11-26 01:34:21.46644087 +0000 UTC m=+0.047851627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:21 compute-0 systemd[1]: Started libpod-conmon-1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9.scope.
Nov 26 01:34:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:21 compute-0 podman[330912]: 2025-11-26 01:34:21.662233506 +0000 UTC m=+0.243644323 container init 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:34:21 compute-0 podman[330912]: 2025-11-26 01:34:21.685704026 +0000 UTC m=+0.267114773 container start 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:34:21 compute-0 podman[330912]: 2025-11-26 01:34:21.692384174 +0000 UTC m=+0.273794921 container attach 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:34:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:22 compute-0 python3.9[331055]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764120860.7875183-630-84460036394624/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:22 compute-0 blissful_shirley[330929]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:34:22 compute-0 blissful_shirley[330929]: --> relative data size: 1.0
Nov 26 01:34:22 compute-0 blissful_shirley[330929]: --> All data devices are unavailable
Nov 26 01:34:22 compute-0 systemd[1]: libpod-1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9.scope: Deactivated successfully.
Nov 26 01:34:22 compute-0 systemd[1]: libpod-1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9.scope: Consumed 1.228s CPU time.
Nov 26 01:34:22 compute-0 podman[330912]: 2025-11-26 01:34:22.990331974 +0000 UTC m=+1.571742721 container died 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1008481b7112def0901a4a58fa024bb7e7d91c3a0320626b7913a391f0f38b0c-merged.mount: Deactivated successfully.
Nov 26 01:34:23 compute-0 podman[330912]: 2025-11-26 01:34:23.083535455 +0000 UTC m=+1.664946172 container remove 1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shirley, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:34:23 compute-0 systemd[1]: libpod-conmon-1259bfdec188290497634dd51f0bbd7fa36ab182aebaecaff2901ff84dbb7dd9.scope: Deactivated successfully.
Nov 26 01:34:23 compute-0 python3.9[331292]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.16158285 +0000 UTC m=+0.073889768 container create 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:34:24 compute-0 systemd[1]: Started libpod-conmon-6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba.scope.
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.137805772 +0000 UTC m=+0.050112770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.278520319 +0000 UTC m=+0.190827317 container init 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.297353289 +0000 UTC m=+0.209660237 container start 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.305429246 +0000 UTC m=+0.217736244 container attach 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:34:24 compute-0 zen_tesla[331519]: 167 167
Nov 26 01:34:24 compute-0 systemd[1]: libpod-6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba.scope: Deactivated successfully.
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.311087705 +0000 UTC m=+0.223394653 container died 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc033e5a18b53e20b0c0c51dd85c4952c8ad0c1abd014eea6d9cbf89428fb540-merged.mount: Deactivated successfully.
Nov 26 01:34:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:24 compute-0 podman[331482]: 2025-11-26 01:34:24.389376456 +0000 UTC m=+0.301683384 container remove 6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 01:34:24 compute-0 systemd[1]: libpod-conmon-6cc01b92ef508b6a88a2020a81cc542e059fb96057269ec2f5934d57be150eba.scope: Deactivated successfully.
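The create/start/attach/died/remove events above trace the full lifecycle of one short-lived cephadm helper container; the "167 167" line is its only stdout, plausibly a uid/gid pair (167 is the conventional ceph user/group id in these images), though the log does not show which command printed it. A minimal sketch of such a one-shot run from Python's stdlib, with a hypothetical argv and assuming podman is on PATH:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm is what makes the "remove" event follow "died" almost immediately
    # in the journal above: podman deletes the container as soon as it exits.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # hypothetically "167 167" if ceph:ceph owns the path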
Nov 26 01:34:24 compute-0 podman[331576]: 2025-11-26 01:34:24.618717426 +0000 UTC m=+0.062298443 container create 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:34:24 compute-0 podman[331576]: 2025-11-26 01:34:24.593736923 +0000 UTC m=+0.037317950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:24 compute-0 systemd[1]: Started libpod-conmon-061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9.scope.
Nov 26 01:34:24 compute-0 python3.9[331570]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:34:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ded6aa0137f1ff91c8eee95fafe2f36e556c41759adf5e4c5be17524410e81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ded6aa0137f1ff91c8eee95fafe2f36e556c41759adf5e4c5be17524410e81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ded6aa0137f1ff91c8eee95fafe2f36e556c41759adf5e4c5be17524410e81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ded6aa0137f1ff91c8eee95fafe2f36e556c41759adf5e4c5be17524410e81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
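The 0x7fffffff in the four xfs remount warnings above is the largest 32-bit signed time_t; inodes with 32-bit timestamps cannot represent anything later. Decoding it confirms the "until 2038" wording:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # max 32-bit signed time_t, as printed by the kernel
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00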
Nov 26 01:34:24 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 01:34:24 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 26 01:34:24 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 26 01:34:24 compute-0 podman[331576]: 2025-11-26 01:34:24.820094019 +0000 UTC m=+0.263675106 container init 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:34:24 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 26 01:34:24 compute-0 podman[331576]: 2025-11-26 01:34:24.832369044 +0000 UTC m=+0.275950061 container start 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:34:24 compute-0 podman[331576]: 2025-11-26 01:34:24.838075694 +0000 UTC m=+0.281656791 container attach 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:34:24 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 26 01:34:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:34:24.942 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:34:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:34:24.943 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:34:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:34:24.944 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
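The acquire/acquired/released triplet above is oslo.concurrency's standard DEBUG trace around a synchronized section (the lockutils.py line numbers in the messages point at its inner wrapper). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock name matches the log, the function body is hypothetical:

    from oslo_concurrency import lockutils

    # Calling the decorated function emits the same three DEBUG lines seen
    # above: "Acquiring lock ...", "Lock ... acquired ... waited Ns", and
    # "Lock ... released ... held Ns".
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # the real body lives in neutron.agent.linux.external_process

    _check_child_processes()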
Nov 26 01:34:25 compute-0 objective_khorana[331592]: {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    "0": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "devices": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "/dev/loop3"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            ],
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_name": "ceph_lv0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_size": "21470642176",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "name": "ceph_lv0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "tags": {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_name": "ceph",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.crush_device_class": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.encrypted": "0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_id": "0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.vdo": "0"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            },
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "vg_name": "ceph_vg0"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        }
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    ],
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    "1": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "devices": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "/dev/loop4"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            ],
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_name": "ceph_lv1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_size": "21470642176",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "name": "ceph_lv1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "tags": {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_name": "ceph",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.crush_device_class": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.encrypted": "0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_id": "1",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.vdo": "0"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            },
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "vg_name": "ceph_vg1"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        }
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    ],
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    "2": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "devices": [
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "/dev/loop5"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            ],
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_name": "ceph_lv2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_size": "21470642176",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "name": "ceph_lv2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "tags": {
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.cluster_name": "ceph",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.crush_device_class": "",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.encrypted": "0",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osd_id": "2",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:                "ceph.vdo": "0"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            },
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "type": "block",
Nov 26 01:34:25 compute-0 objective_khorana[331592]:            "vg_name": "ceph_vg2"
Nov 26 01:34:25 compute-0 objective_khorana[331592]:        }
Nov 26 01:34:25 compute-0 objective_khorana[331592]:    ]
Nov 26 01:34:25 compute-0 objective_khorana[331592]: }
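The JSON document the objective_khorana container just emitted maps each OSD id to a list of LV records carrying the ceph.* lvm tags; the shape matches ceph-volume's JSON lvm listing, though the log does not show the command line. A short sketch that folds it into a per-OSD device table, assuming the stdout above was captured to a file:

    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the stdout above
        report = json.load(fh)

    # Keys are OSD ids as strings ("0", "1", "2"); each value is a list of LVs.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])
    # 0 /dev/ceph_vg0/ceph_lv0 835781ef-644a-4834-abb3-029e5bcba0ff ['/dev/loop3']
    # 1 /dev/ceph_vg1/ceph_lv1 a345f9b0-19f1-464f-95c4-9c68bb202f1e ['/dev/loop4']
    # 2 /dev/ceph_vg2/ceph_lv2 8f697525-afad-4f38-820d-80587338cf3b ['/dev/loop5']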
Nov 26 01:34:25 compute-0 systemd[1]: libpod-061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9.scope: Deactivated successfully.
Nov 26 01:34:25 compute-0 podman[331576]: 2025-11-26 01:34:25.732920269 +0000 UTC m=+1.176501316 container died 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ded6aa0137f1ff91c8eee95fafe2f36e556c41759adf5e4c5be17524410e81-merged.mount: Deactivated successfully.
Nov 26 01:34:25 compute-0 podman[331576]: 2025-11-26 01:34:25.83465951 +0000 UTC m=+1.278240527 container remove 061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:34:25 compute-0 systemd[1]: libpod-conmon-061ef6621ca67956f74c654388cd6eb8d9dae4f6792b7db50a093a66bb5194c9.scope: Deactivated successfully.
Nov 26 01:34:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:26 compute-0 python3.9[331870]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.045159701 +0000 UTC m=+0.081593555 container create 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.010650501 +0000 UTC m=+0.047084375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:27 compute-0 systemd[1]: Started libpod-conmon-17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917.scope.
Nov 26 01:34:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.182054741 +0000 UTC m=+0.218488575 container init 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.199051239 +0000 UTC m=+0.235485063 container start 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:34:27 compute-0 vibrant_bohr[331926]: 167 167
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.203813743 +0000 UTC m=+0.240247587 container attach 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:34:27 compute-0 systemd[1]: libpod-17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917.scope: Deactivated successfully.
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.209359249 +0000 UTC m=+0.245793113 container died 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-48b1558333b883969fb7da00e2f01de6e8cc3538942650436287a4235d1ec3b1-merged.mount: Deactivated successfully.
Nov 26 01:34:27 compute-0 podman[331910]: 2025-11-26 01:34:27.281869718 +0000 UTC m=+0.318303552 container remove 17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:34:27 compute-0 systemd[1]: libpod-conmon-17a7b32a9fa4eedaac3350016a315ec3349d391b85115c731be1aaf26e83c917.scope: Deactivated successfully.
Nov 26 01:34:27 compute-0 podman[331945]: 2025-11-26 01:34:27.470718119 +0000 UTC m=+0.128505895 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:34:27 compute-0 podman[331963]: 2025-11-26 01:34:27.520495088 +0000 UTC m=+0.075880214 container create 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:34:27 compute-0 podman[331963]: 2025-11-26 01:34:27.489905468 +0000 UTC m=+0.045290634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:34:27 compute-0 systemd[1]: Started libpod-conmon-0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27.scope.
Nov 26 01:34:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058c7b49f345774d9c93405f7c9199049bdbe7cc3e25760bc114a100d68b183f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058c7b49f345774d9c93405f7c9199049bdbe7cc3e25760bc114a100d68b183f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058c7b49f345774d9c93405f7c9199049bdbe7cc3e25760bc114a100d68b183f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058c7b49f345774d9c93405f7c9199049bdbe7cc3e25760bc114a100d68b183f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:34:27 compute-0 podman[331963]: 2025-11-26 01:34:27.711269202 +0000 UTC m=+0.266654408 container init 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:34:27 compute-0 podman[331963]: 2025-11-26 01:34:27.732120629 +0000 UTC m=+0.287505755 container start 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:34:27 compute-0 podman[331963]: 2025-11-26 01:34:27.738306483 +0000 UTC m=+0.293691629 container attach 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:34:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]: {
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_id": 0,
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "type": "bluestore"
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    },
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_id": 2,
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "type": "bluestore"
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    },
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_id": 1,
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:        "type": "bluestore"
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]:    }
Nov 26 01:34:28 compute-0 laughing_sinoussi[331984]: }
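Unlike the previous listing, this document is keyed by OSD uuid and maps each bluestore OSD to its device-mapper node; the shape is consistent with ceph-volume's raw listing, an inference rather than something the log states. Inverting it into an osd_id-to-device map:

    import json

    with open("raw_list.json") as fh:  # hypothetical capture of the stdout above
        raw = json.load(fh)

    by_osd_id = {entry["osd_id"]: entry["device"] for entry in raw.values()}
    print(dict(sorted(by_osd_id.items())))
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  2: '/dev/mapper/ceph_vg2-ceph_lv2'}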
Nov 26 01:34:29 compute-0 systemd[1]: libpod-0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27.scope: Deactivated successfully.
Nov 26 01:34:29 compute-0 podman[331963]: 2025-11-26 01:34:29.037046415 +0000 UTC m=+1.592431571 container died 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:34:29 compute-0 systemd[1]: libpod-0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27.scope: Consumed 1.266s CPU time.
Nov 26 01:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-058c7b49f345774d9c93405f7c9199049bdbe7cc3e25760bc114a100d68b183f-merged.mount: Deactivated successfully.
Nov 26 01:34:29 compute-0 podman[331963]: 2025-11-26 01:34:29.152383169 +0000 UTC m=+1.707768295 container remove 0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 26 01:34:29 compute-0 systemd[1]: libpod-conmon-0a766ecbeeccfb0e5c9f8b8e2f26dfda5a2d641ea797055ceaccf93148caef27.scope: Deactivated successfully.
Nov 26 01:34:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:34:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:34:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6a182828-6b25-43d9-b26b-d7e91262a08a does not exist
Nov 26 01:34:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev dc834bb7-07c2-4b67-99ac-fbf2d355c5be does not exist
Nov 26 01:34:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
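The recurring _set_new_cache_sizes line reports the monitor's cache autotuning; inc_alloc and full_alloc are apparently the incremental and full OSDMap caches and kv_alloc the RocksDB cache. The three allocations come close to partitioning the reported cache_size, which the numbers in the line let you verify directly:

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc = 348127232
    full_alloc = 348127232
    kv_alloc = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size, f"{total / cache_size:.4f}")
    # 1019215872 1020054731 0.9992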
Nov 26 01:34:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:34:29 compute-0 podman[158021]: time="2025-11-26T01:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:34:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38322 "" "Go-http-client/1.1"
Nov 26 01:34:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7691 "" "Go-http-client/1.1"
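The two GET lines are podman's REST service answering a Go client over its unix socket (most likely the prometheus-podman-exporter configured later in this log with CONTAINER_HOST=unix:///run/podman/podman.sock). The same libpod endpoint can be queried from Python's stdlib; the socket path is the conventional one and an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix socket, not TCP."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")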
Nov 26 01:34:29 compute-0 systemd[1]: Reloading.
Nov 26 01:34:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:30 compute-0 systemd[1]: Reloading.
Nov 26 01:34:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:31 compute-0 openstack_network_exporter[160178]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:34:31 compute-0 openstack_network_exporter[160178]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:34:31 compute-0 openstack_network_exporter[160178]: ERROR   01:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:34:31 compute-0 openstack_network_exporter[160178]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:34:31 compute-0 openstack_network_exporter[160178]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:34:31 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 01:34:31 compute-0 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 01:34:31 compute-0 lvm[332196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 01:34:31 compute-0 lvm[332196]: VG ceph_vg2 finished
Nov 26 01:34:31 compute-0 lvm[332198]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 01:34:31 compute-0 lvm[332198]: VG ceph_vg1 finished
Nov 26 01:34:31 compute-0 lvm[332195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 01:34:31 compute-0 lvm[332195]: VG ceph_vg0 finished
Nov 26 01:34:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 01:34:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 26 01:34:31 compute-0 systemd[1]: Reloading.
Nov 26 01:34:32 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:32 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 01:34:34 compute-0 python3.9[333414]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:34:34 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 26 01:34:34 compute-0 iscsid[319725]: iscsid shutting down.
Nov 26 01:34:34 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 26 01:34:34 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 26 01:34:34 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 01:34:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:34 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 26 01:34:34 compute-0 systemd[1]: Started Open-iSCSI.
Nov 26 01:34:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 01:34:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 26 01:34:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.750s CPU time.
Nov 26 01:34:34 compute-0 systemd[1]: run-r24d59c58bb704aa1a6a6dafa077bacc5.service: Deactivated successfully.
Nov 26 01:34:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:35 compute-0 python3.9[333693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:34:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:36 compute-0 python3.9[333849]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:37 compute-0 podman[333927]: 2025-11-26 01:34:37.531472579 +0000 UTC m=+0.088005736 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:34:37 compute-0 podman[333926]: 2025-11-26 01:34:37.546758448 +0000 UTC m=+0.106727912 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 01:34:37 compute-0 podman[333928]: 2025-11-26 01:34:37.617895269 +0000 UTC m=+0.170269479 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
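Note that the config_data blob inside each health_status event is rendered as a Python literal (single quotes, bare True), not JSON, so json.loads rejects it while ast.literal_eval parses it. A sketch, with the label abbreviated to two keys from the ovn_controller event above:

    import ast

    label = "{'depends_on': ['openvswitch.service'], 'privileged': True}"

    config = ast.literal_eval(label)  # safe parser for Python literals
    print(config["depends_on"], config["privileged"])
    # ['openvswitch.service'] True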
Nov 26 01:34:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:38 compute-0 python3.9[334066]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:34:38 compute-0 systemd[1]: Reloading.
Nov 26 01:34:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:34:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:34:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:39 compute-0 podman[334226]: 2025-11-26 01:34:39.568879143 +0000 UTC m=+0.103590544 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:34:39 compute-0 podman[334224]: 2025-11-26 01:34:39.580346375 +0000 UTC m=+0.116435175 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Nov 26 01:34:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:40 compute-0 python3.9[334293]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:34:40 compute-0 network[334310]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Nov 26 01:34:40 compute-0 network[334311]: 'network-scripts' will be removed from the distribution in the near future.
Nov 26 01:34:40 compute-0 network[334312]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:34:41
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'volumes', 'cephfs.cephfs.data']
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:34:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:34:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:46 compute-0 podman[334558]: 2025-11-26 01:34:46.989162761 +0000 UTC m=+0.117935548 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:34:47 compute-0 python3.9[334603]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:48 compute-0 python3.9[334756]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:49 compute-0 podman[334881]: 2025-11-26 01:34:49.37738405 +0000 UTC m=+0.097057090 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:34:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:49 compute-0 podman[334882]: 2025-11-26 01:34:49.403656669 +0000 UTC m=+0.116027124 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 26 01:34:49 compute-0 python3.9[334946]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:34:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:34:50 compute-0 python3.9[335100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:52 compute-0 python3.9[335253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:53 compute-0 python3.9[335406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:55 compute-0 python3.9[335559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:57 compute-0 python3.9[335712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:34:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:34:58 compute-0 podman[335837]: 2025-11-26 01:34:58.360287792 +0000 UTC m=+0.143087555 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 01:34:58 compute-0 python3.9[335883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:34:59 compute-0 python3.9[336035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:34:59 compute-0 podman[158021]: time="2025-11-26T01:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:34:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Nov 26 01:34:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7687 "" "Go-http-client/1.1"
Nov 26 01:35:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:00 compute-0 python3.9[336187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:01 compute-0 openstack_network_exporter[160178]: ERROR   01:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:35:01 compute-0 openstack_network_exporter[160178]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:35:01 compute-0 openstack_network_exporter[160178]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:35:01 compute-0 openstack_network_exporter[160178]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:35:01 compute-0 openstack_network_exporter[160178]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:35:01 compute-0 python3.9[336339]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:02 compute-0 python3.9[336491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:04 compute-0 python3.9[336643]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:05 compute-0 python3.9[336795]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:06 compute-0 python3.9[336947]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:07 compute-0 python3.9[337099]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:08 compute-0 podman[337170]: 2025-11-26 01:35:08.567598455 +0000 UTC m=+0.115912911 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 26 01:35:08 compute-0 podman[337177]: 2025-11-26 01:35:08.592573997 +0000 UTC m=+0.133517946 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:35:08 compute-0 podman[337178]: 2025-11-26 01:35:08.620092161 +0000 UTC m=+0.157372977 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 26 01:35:09 compute-0 python3.9[337315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:10 compute-0 podman[337415]: 2025-11-26 01:35:10.562045762 +0000 UTC m=+0.112392952 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, distribution-scope=public, name=ubi9-minimal)
Nov 26 01:35:10 compute-0 podman[337416]: 2025-11-26 01:35:10.58798581 +0000 UTC m=+0.115935230 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:35:10 compute-0 python3.9[337512]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:12 compute-0 python3.9[337664]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:13 compute-0 python3.9[337816]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:14 compute-0 python3.9[337968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:15 compute-0 python3.9[338120]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:16 compute-0 python3.9[338272]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:35:17 compute-0 podman[338396]: 2025-11-26 01:35:17.388754266 +0000 UTC m=+0.124718298 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:35:17 compute-0 python3.9[338443]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:18 compute-0 python3.9[338595]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:35:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:19 compute-0 podman[338649]: 2025-11-26 01:35:19.540436144 +0000 UTC m=+0.096767452 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 01:35:19 compute-0 podman[338645]: 2025-11-26 01:35:19.542618676 +0000 UTC m=+0.098215043 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container)
Nov 26 01:35:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:20 compute-0 python3.9[338787]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:35:20 compute-0 systemd[1]: Reloading.
Nov 26 01:35:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:35:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:35:21 compute-0 python3.9[338973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:22 compute-0 python3.9[339126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:24 compute-0 python3.9[339279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:35:24.943 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:35:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:35:24.943 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:35:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:35:24.944 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:35:25 compute-0 python3.9[339432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:26 compute-0 python3.9[339585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:27 compute-0 python3.9[339738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:28 compute-0 podman[339740]: 2025-11-26 01:35:28.564240025 +0000 UTC m=+0.114627974 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 01:35:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:29 compute-0 podman[158021]: time="2025-11-26T01:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:35:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Nov 26 01:35:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7683 "" "Go-http-client/1.1"
Nov 26 01:35:29 compute-0 python3.9[339918]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 69a49e03-9ef1-42d6-89ef-dd3e6d61e18d does not exist
Nov 26 01:35:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 14d5e100-2eaa-450e-b86b-4816ef8230a0 does not exist
Nov 26 01:35:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 52b51ceb-a701-4e5d-bc6b-2b8ac923ce41 does not exist
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:35:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:35:30 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
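[annotation] The handle_command/audit pairs above show the mgr dispatching JSON-encoded mon commands ("config generate-minimal-conf", "auth get", "osd tree"). The same commands can be issued from Python through the rados binding, which takes the identical JSON payload; this sketch assumes a readable /etc/ceph/ceph.conf and an admin keyring on the node:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same payload format as the mon_command(...) lines in the log.
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(outbuf.decode())
        else:
            print("mon_command failed:", ret, outs)
    finally:
        cluster.shutdown()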
Nov 26 01:35:30 compute-0 python3.9[340195]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:35:31 compute-0 openstack_network_exporter[160178]: ERROR   01:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:35:31 compute-0 openstack_network_exporter[160178]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:35:31 compute-0 openstack_network_exporter[160178]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:35:31 compute-0 openstack_network_exporter[160178]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:35:31 compute-0 openstack_network_exporter[160178]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
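[annotation] The exporter errors above all amount to "no control socket found": ovn-northd normally runs on control-plane nodes rather than a compute node, and the dpif-netdev calls fail because no userspace (DPDK) datapath exists, which is expected with kernel-datapath OVS. A quick local check for the same sockets; the glob patterns reflect the usual rundir layout and are assumptions about this host:

    import glob

    patterns = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd":   "/var/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in patterns.items():
        hits = glob.glob(pattern)
        print(daemon, "->",
              hits if hits else "no control socket (matches the exporter errors)")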
Nov 26 01:35:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:35:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.80999465 +0000 UTC m=+0.100940129 container create 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.756797914 +0000 UTC m=+0.047743453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:31 compute-0 systemd[1]: Started libpod-conmon-3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac.scope.
Nov 26 01:35:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.963801016 +0000 UTC m=+0.254746545 container init 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.981452562 +0000 UTC m=+0.272398051 container start 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.9884853 +0000 UTC m=+0.279430789 container attach 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:35:31 compute-0 musing_wright[340374]: 167 167
Nov 26 01:35:31 compute-0 systemd[1]: libpod-3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac.scope: Deactivated successfully.
Nov 26 01:35:31 compute-0 podman[340358]: 2025-11-26 01:35:31.99809032 +0000 UTC m=+0.289035859 container died 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-70880b03431b86ba0b7676cbe331e602ac3d375ff06691a2087658395d07fea6-merged.mount: Deactivated successfully.
Nov 26 01:35:32 compute-0 podman[340358]: 2025-11-26 01:35:32.080692012 +0000 UTC m=+0.371637491 container remove 3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:35:32 compute-0 systemd[1]: libpod-conmon-3350663b3c4d3a6587a7538e6aa0f5554f903f717d0b55b1038b0b80bc31efac.scope: Deactivated successfully.
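[annotation] The musing_wright lifecycle above (create, init, start, attach, died, remove in well under a second) is cephadm running a throwaway container from the ceph image. Its only output, "167 167", is consistent with a uid/gid probe for the ceph user (167 is the ceph uid/gid on CentOS/RHEL images). A sketch reproducing such a probe; the stat target path is an assumption about what cephadm inspects:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run stat inside the image to ask which uid:gid owns /var/lib/ceph.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = out.stdout.split()
    print(f"ceph runs as {uid}:{gid} in this image")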
Nov 26 01:35:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:32 compute-0 podman[340449]: 2025-11-26 01:35:32.369190225 +0000 UTC m=+0.091888215 container create a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:35:32 compute-0 podman[340449]: 2025-11-26 01:35:32.332455662 +0000 UTC m=+0.055153722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:32 compute-0 systemd[1]: Started libpod-conmon-a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a.scope.
Nov 26 01:35:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:32 compute-0 podman[340449]: 2025-11-26 01:35:32.522676981 +0000 UTC m=+0.245375031 container init a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 26 01:35:32 compute-0 podman[340449]: 2025-11-26 01:35:32.557813129 +0000 UTC m=+0.280511139 container start a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:35:32 compute-0 podman[340449]: 2025-11-26 01:35:32.565707671 +0000 UTC m=+0.288405671 container attach a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:35:33 compute-0 python3.9[340545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:33 compute-0 relaxed_jones[340491]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:35:33 compute-0 relaxed_jones[340491]: --> relative data size: 1.0
Nov 26 01:35:33 compute-0 relaxed_jones[340491]: --> All data devices are unavailable
Nov 26 01:35:33 compute-0 systemd[1]: libpod-a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a.scope: Deactivated successfully.
Nov 26 01:35:33 compute-0 systemd[1]: libpod-a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a.scope: Consumed 1.169s CPU time.
Nov 26 01:35:33 compute-0 podman[340449]: 2025-11-26 01:35:33.816052713 +0000 UTC m=+1.538750673 container died a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a19dc60c2a2b089b9215c5644ac4d0bbd936a8b21569b689176c76b9d016c91-merged.mount: Deactivated successfully.
Nov 26 01:35:33 compute-0 podman[340449]: 2025-11-26 01:35:33.889387135 +0000 UTC m=+1.612085105 container remove a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jones, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:35:33 compute-0 systemd[1]: libpod-conmon-a973a43c88eb512af78fead841db0e71e693cee28513e3e1e983ee093e70320a.scope: Deactivated successfully.
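[annotation] The relaxed_jones output ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") reads like a ceph-volume lvm batch dry run: all three LVs already carry OSDs (see the lvm list JSON further down), so they are correctly reported unavailable on a re-run. A sketch of the equivalent report invocation, which would normally execute inside the ceph container just as the log does; the device paths are taken from the JSON below:

    import subprocess

    devices = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]
    # --report only prints what batch would do; nothing is created.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", *devices],
        check=False,
    )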
Nov 26 01:35:34 compute-0 python3.9[340721]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
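[annotation] The ansible.builtin.file invocations scattered through this section all follow one pattern: ensure a directory exists, owned by zuul, mode 0755, SELinux type container_file_t. Roughly what each one does, with chcon standing in for the module's SELinux handling (that substitution is an assumption):

    import grp
    import os
    import pwd
    import subprocess

    def ensure_dir(path: str, owner: str = "zuul", mode: int = 0o755) -> None:
        os.makedirs(path, mode=mode, exist_ok=True)
        os.chmod(path, mode)  # makedirs' mode is masked by the umask
        os.chown(path,
                 pwd.getpwnam(owner).pw_uid,
                 grp.getgrnam(owner).gr_gid)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=False)

    for d in ("/var/lib/openstack/config/nova",
              "/var/lib/openstack/config/containers"):
        ensure_dir(d)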
Nov 26 01:35:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:34 compute-0 podman[341022]: 2025-11-26 01:35:34.923102225 +0000 UTC m=+0.053726082 container create 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:35:34 compute-0 systemd[1]: Started libpod-conmon-44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791.scope.
Nov 26 01:35:34 compute-0 podman[341022]: 2025-11-26 01:35:34.896673242 +0000 UTC m=+0.027297109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:35 compute-0 python3.9[341023]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:35 compute-0 podman[341022]: 2025-11-26 01:35:35.051809604 +0000 UTC m=+0.182433541 container init 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:35:35 compute-0 podman[341022]: 2025-11-26 01:35:35.073879445 +0000 UTC m=+0.204503322 container start 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:35:35 compute-0 podman[341022]: 2025-11-26 01:35:35.081923981 +0000 UTC m=+0.212547878 container attach 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:35:35 compute-0 elated_kare[341039]: 167 167
Nov 26 01:35:35 compute-0 systemd[1]: libpod-44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791.scope: Deactivated successfully.
Nov 26 01:35:35 compute-0 podman[341022]: 2025-11-26 01:35:35.087462687 +0000 UTC m=+0.218086564 container died 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed3371610e22cd6eda1086a5da4594e37da1d6f5741955d0e3eaf911f77a7898-merged.mount: Deactivated successfully.
Nov 26 01:35:35 compute-0 podman[341022]: 2025-11-26 01:35:35.168490196 +0000 UTC m=+0.299114053 container remove 44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:35:35 compute-0 systemd[1]: libpod-conmon-44bc95d839030b166d0076ddfcafb692e3bb3fcb136400242c3b8bb88f78a791.scope: Deactivated successfully.
Nov 26 01:35:35 compute-0 podman[341091]: 2025-11-26 01:35:35.451429102 +0000 UTC m=+0.089169848 container create b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:35:35 compute-0 podman[341091]: 2025-11-26 01:35:35.418225659 +0000 UTC m=+0.055966465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:35 compute-0 systemd[1]: Started libpod-conmon-b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa.scope.
Nov 26 01:35:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642ad8f9178ab6f2440d9ed4830879a33d54a90da2cc6be9edbadfc8a5c1b0a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642ad8f9178ab6f2440d9ed4830879a33d54a90da2cc6be9edbadfc8a5c1b0a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642ad8f9178ab6f2440d9ed4830879a33d54a90da2cc6be9edbadfc8a5c1b0a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642ad8f9178ab6f2440d9ed4830879a33d54a90da2cc6be9edbadfc8a5c1b0a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:35 compute-0 podman[341091]: 2025-11-26 01:35:35.597273954 +0000 UTC m=+0.235014680 container init b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:35:35 compute-0 podman[341091]: 2025-11-26 01:35:35.608599622 +0000 UTC m=+0.246340338 container start b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:35:35 compute-0 podman[341091]: 2025-11-26 01:35:35.613079848 +0000 UTC m=+0.250820604 container attach b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:35:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:36 compute-0 python3.9[341233]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]: {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    "0": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "devices": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "/dev/loop3"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            ],
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_name": "ceph_lv0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_size": "21470642176",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "name": "ceph_lv0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "tags": {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_name": "ceph",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.crush_device_class": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.encrypted": "0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_id": "0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.vdo": "0"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            },
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "vg_name": "ceph_vg0"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        }
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    ],
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    "1": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "devices": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "/dev/loop4"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            ],
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_name": "ceph_lv1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_size": "21470642176",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "name": "ceph_lv1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "tags": {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_name": "ceph",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.crush_device_class": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.encrypted": "0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_id": "1",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.vdo": "0"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            },
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "vg_name": "ceph_vg1"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        }
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    ],
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    "2": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "devices": [
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "/dev/loop5"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            ],
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_name": "ceph_lv2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_size": "21470642176",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "name": "ceph_lv2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "tags": {
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.cluster_name": "ceph",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.crush_device_class": "",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.encrypted": "0",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osd_id": "2",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:                "ceph.vdo": "0"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            },
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "type": "block",
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:            "vg_name": "ceph_vg2"
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:        }
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]:    ]
Nov 26 01:35:36 compute-0 gallant_satoshi[341152]: }
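[annotation] The gallant_satoshi JSON above is a ceph-volume lvm-list style document keyed by OSD id. A small sketch folding it into an osd -> device/fsid summary; reading it from a file named lvm_list.json is an assumption, since the log shows the payload inline:

    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(pv={lv['devices'][0]}, "
                  f"osd_fsid={tags['ceph.osd_fsid']})")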
Nov 26 01:35:36 compute-0 systemd[1]: libpod-b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa.scope: Deactivated successfully.
Nov 26 01:35:36 compute-0 podman[341091]: 2025-11-26 01:35:36.490976635 +0000 UTC m=+1.128717391 container died b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-642ad8f9178ab6f2440d9ed4830879a33d54a90da2cc6be9edbadfc8a5c1b0a1-merged.mount: Deactivated successfully.
Nov 26 01:35:36 compute-0 podman[341091]: 2025-11-26 01:35:36.619309094 +0000 UTC m=+1.257049850 container remove b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:35:36 compute-0 systemd[1]: libpod-conmon-b54aa5613186c26b160da29f0c12aaa640220ee400a7a7c6bf7aab396418fdfa.scope: Deactivated successfully.
Nov 26 01:35:37 compute-0 python3.9[341474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.698910114 +0000 UTC m=+0.077435129 container create 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.668271002 +0000 UTC m=+0.046796027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:37 compute-0 systemd[1]: Started libpod-conmon-648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53.scope.
Nov 26 01:35:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.837548573 +0000 UTC m=+0.216073628 container init 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.846473384 +0000 UTC m=+0.224998359 container start 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.851726291 +0000 UTC m=+0.230251336 container attach 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:35:37 compute-0 wizardly_galois[341631]: 167 167
Nov 26 01:35:37 compute-0 systemd[1]: libpod-648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53.scope: Deactivated successfully.
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.855895849 +0000 UTC m=+0.234420844 container died 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-85e42e10fb56a31b1b3ecdf1e067df5ec0a8ef0a0ba2acac01217021b4888231-merged.mount: Deactivated successfully.
Nov 26 01:35:37 compute-0 podman[341586]: 2025-11-26 01:35:37.91389866 +0000 UTC m=+0.292423615 container remove 648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_galois, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:35:37 compute-0 systemd[1]: libpod-conmon-648ee9d55f2ae7822f1a148ad926d5a34a1d0af6c3948246b6d5545859831e53.scope: Deactivated successfully.
Nov 26 01:35:38 compute-0 podman[341707]: 2025-11-26 01:35:38.135329857 +0000 UTC m=+0.079790705 container create 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:35:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:38 compute-0 podman[341707]: 2025-11-26 01:35:38.100701983 +0000 UTC m=+0.045162901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:35:38 compute-0 systemd[1]: Started libpod-conmon-2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106.scope.
Nov 26 01:35:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28cc14e835fd425726c8256a0b40d34e007d9d52d198d4b19fde64ab3f1a765/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28cc14e835fd425726c8256a0b40d34e007d9d52d198d4b19fde64ab3f1a765/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28cc14e835fd425726c8256a0b40d34e007d9d52d198d4b19fde64ab3f1a765/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28cc14e835fd425726c8256a0b40d34e007d9d52d198d4b19fde64ab3f1a765/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:35:38 compute-0 podman[341707]: 2025-11-26 01:35:38.326888514 +0000 UTC m=+0.271349352 container init 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:35:38 compute-0 podman[341707]: 2025-11-26 01:35:38.342744 +0000 UTC m=+0.287204858 container start 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 01:35:38 compute-0 podman[341707]: 2025-11-26 01:35:38.361136817 +0000 UTC m=+0.305597745 container attach 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:35:38 compute-0 python3.9[341744]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
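Each ansible-ansible.builtin.file line records an Ansible module invocation with its full argument set; this one ensures /var/lib/nova/instances is a 0755 directory owned by zuul:zuul with the container_file_t SELinux type. A rough Python equivalent of the effective operations (a simplification: the real module uses libselinux bindings rather than chcon and also handles check/diff modes):

import os, pwd, grp, subprocess

path = "/var/lib/nova/instances"
os.makedirs(path, mode=0o755, exist_ok=True)    # state=directory
os.chmod(path, 0o755)                           # mode=0755 (makedirs is umask-filtered)
os.chown(path,
         pwd.getpwnam("zuul").pw_uid,           # owner=zuul
         grp.getgrnam("zuul").gr_gid)           # group=zuul
subprocess.run(["chcon", "-t", "container_file_t", path],   # setype=container_file_t
               check=True)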
Nov 26 01:35:39 compute-0 loving_payne[341748]: {
Nov 26 01:35:39 compute-0 loving_payne[341748]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_id": 0,
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "type": "bluestore"
Nov 26 01:35:39 compute-0 loving_payne[341748]:    },
Nov 26 01:35:39 compute-0 loving_payne[341748]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_id": 2,
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "type": "bluestore"
Nov 26 01:35:39 compute-0 loving_payne[341748]:    },
Nov 26 01:35:39 compute-0 loving_payne[341748]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_id": 1,
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:35:39 compute-0 loving_payne[341748]:        "type": "bluestore"
Nov 26 01:35:39 compute-0 loving_payne[341748]:    }
Nov 26 01:35:39 compute-0 loving_payne[341748]: }
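The JSON printed by loving_payne above is the host's OSD inventory, which cephadm then persists under the mgr/cephadm/host.compute-0.devices config-key (see the mon_command lines below): three BlueStore OSDs, ids 0-2, each on an LVM logical volume, all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A small sketch that turns that structure into an osd_id -> device table; the literal is abbreviated to one entry:

import json

# Abbreviated copy of the report above: entries are keyed by osd_uuid.
report = json.loads("""
{
  "835781ef-644a-4834-abb3-029e5bcba0ff": {
    "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
    "type": "bluestore"
  }
}
""")

# Sort by numeric OSD id for display.
for osd in sorted(report.values(), key=lambda o: o["osd_id"]):
    print(f"osd.{osd['osd_id']}  {osd['device']}  ({osd['type']})")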
Nov 26 01:35:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:39 compute-0 podman[341707]: 2025-11-26 01:35:39.430971331 +0000 UTC m=+1.375432179 container died 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:35:39 compute-0 systemd[1]: libpod-2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106.scope: Deactivated successfully.
Nov 26 01:35:39 compute-0 systemd[1]: libpod-2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106.scope: Consumed 1.092s CPU time.
Nov 26 01:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28cc14e835fd425726c8256a0b40d34e007d9d52d198d4b19fde64ab3f1a765-merged.mount: Deactivated successfully.
Nov 26 01:35:39 compute-0 podman[341707]: 2025-11-26 01:35:39.520294563 +0000 UTC m=+1.464755391 container remove 2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:35:39 compute-0 systemd[1]: libpod-conmon-2fb7c5e7e4df7ae9ae4b63d50ca32ffc07844bf54003668b333329750da34106.scope: Deactivated successfully.
Nov 26 01:35:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:35:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:35:39 compute-0 podman[341880]: 2025-11-26 01:35:39.592955177 +0000 UTC m=+0.123970738 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 26 01:35:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2c99a348-bc0c-4132-8e94-163a406d77df does not exist
Nov 26 01:35:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7721f46e-4f95-40aa-9bec-ffb34fba53be does not exist
Nov 26 01:35:39 compute-0 podman[341881]: 2025-11-26 01:35:39.6097843 +0000 UTC m=+0.163926771 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:35:39 compute-0 podman[341883]: 2025-11-26 01:35:39.619364269 +0000 UTC m=+0.158281082 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 26 01:35:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:35:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:40 compute-0 python3.9[342057]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:35:41
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log']
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
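The balancer pass above runs in upmap mode with a 5% misplaced-object budget: it walks the eleven listed pools and prepares 0 of at most 10 optimizations because placement is already even. A schematic sketch of that loop; misplaced_ratio and propose_upmap are hypothetical stand-ins for the mgr's internals, and only the mode, threshold, and change cap come from the log:

MAX_MISPLACED = 0.05    # "max misplaced 0.050000"
MAX_CHANGES = 10        # "prepared 0/10 changes"

def misplaced_ratio():
    return 0.0          # hypothetical: fraction of objects currently misplaced

def propose_upmap(pool):
    return None         # hypothetical: one pg-upmap change for the pool, or None

def do_upmap(pools):
    if misplaced_ratio() >= MAX_MISPLACED:
        return []       # over budget: let recovery catch up before rebalancing
    changes = []
    for pool in pools:
        change = propose_upmap(pool)
        if change is not None:
            changes.append(change)
            if len(changes) >= MAX_CHANGES:
                break
    print(f"prepared {len(changes)}/{MAX_CHANGES} changes")
    return changes

do_upmap(['default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms',
          '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images',
          'default.rgw.meta', 'default.rgw.log'])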
Nov 26 01:35:41 compute-0 podman[342181]: 2025-11-26 01:35:41.060948529 +0000 UTC m=+0.121966371 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350)
Nov 26 01:35:41 compute-0 podman[342182]: 2025-11-26 01:35:41.079915662 +0000 UTC m=+0.123376920 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:35:41 compute-0 python3.9[342244]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:35:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:35:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:42 compute-0 python3.9[342403]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:43 compute-0 python3.9[342555]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:47 compute-0 podman[342580]: 2025-11-26 01:35:47.576410613 +0000 UTC m=+0.121356634 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 01:35:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:49 compute-0 python3.9[342726]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:50 compute-0 podman[342853]: 2025-11-26 01:35:50.466484504 +0000 UTC m=+0.137263591 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:35:50 compute-0 podman[342852]: 2025-11-26 01:35:50.482456103 +0000 UTC m=+0.158302372 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:35:50 compute-0 python3.9[342914]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:35:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
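The pg_autoscaler output above is reproducible arithmetic: each pool's raw PG target is capacity_ratio x bias x (OSD count x target PGs per OSD). With the 3 OSDs from the inventory earlier and the default mon_target_pg_per_osd of 100, the budget is 300, which matches every printed line (.mgr: 7.185750e-06 x 1.0 x 300 = 0.0021557; cephfs.cephfs.meta: 5.087257e-07 x 4.0 x 300 = 0.00061047). A sketch of that computation; the OSD count and per-OSD target are assumptions consistent with those figures, and the real quantization also applies per-pool minimums and hysteresis (hence "quantized to 16 (current 32)" rather than 1):

OSDS = 3                    # osd.0-osd.2 from the inventory above
TARGET_PG_PER_OSD = 100     # assumed default mon_target_pg_per_osd

def pg_target(capacity_ratio, bias):
    # "using R of space, bias B, pg target T" with T = R * B * OSDS * 100
    return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

def quantize(target, floor=1):
    # Simplified power-of-two rounding; the autoscaler layers minimums
    # and hysteresis on top of this, so it rarely shrinks all the way.
    n = floor
    while n < target:
        n *= 2
    return n

print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557
print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> ~0.00061047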
Nov 26 01:35:52 compute-0 python3.9[343077]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
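The getent/group/user sequence above provisions the nova service account: group nova with gid 42436, then user nova with uid 42436, shell /bin/sh, and supplementary membership in libvirt. A quick post-hoc verification with the standard pwd/grp modules:

import grp, pwd

user = pwd.getpwnam("nova")
assert user.pw_uid == 42436            # uid from the ansible.builtin.user call
assert user.pw_shell == "/bin/sh"      # shell=/bin/sh
assert grp.getgrnam("nova").gr_gid == 42436
assert "nova" in grp.getgrnam("libvirt").gr_mem   # groups=['libvirt']
print("nova account matches the invocation above")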
Nov 26 01:35:52 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:35:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:35:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5680 writes, 24K keys, 5680 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5680 writes, 896 syncs, 6.34 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a132e68dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
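The giant ceph-osd record above (and its twin below) arrives as a single journal line because rsyslog escapes embedded control characters: every #012 is an octal escape for a newline (#011 for a tab), so the whole RocksDB "DUMPING STATS" table is flattened onto one line. A sketch to render such records readable again; the input file name is an assumption:

def unescape_syslog(line: str) -> str:
    # Undo rsyslog's control-character escaping (#NNN = octal byte value):
    # #012 is newline, #011 is tab.
    return line.replace("#012", "\n").replace("#011", "\t")

with open("osd-stats.log") as fh:    # assumed export of the record above
    for line in fh:
        print(unescape_syslog(line.rstrip("\n")))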
Nov 26 01:35:53 compute-0 systemd-logind[800]: New session 56 of user zuul.
Nov 26 01:35:53 compute-0 systemd[1]: Started Session 56 of User zuul.
Nov 26 01:35:53 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Nov 26 01:35:53 compute-0 systemd-logind[800]: Session 56 logged out. Waiting for processes to exit.
Nov 26 01:35:53 compute-0 systemd-logind[800]: Removed session 56.
Nov 26 01:35:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:54 compute-0 python3.9[343264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:35:55 compute-0 python3.9[343385]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120953.7960021-1249-270782011227353/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
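The stat/copy pair above is Ansible's idempotent file deployment: it checksums the existing /var/lib/openstack/config/nova/config.json and only rewrites it when the sha1 differs from the rendered template's b51012bf... digest. A minimal sketch of that comparison:

import hashlib

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "b51012bfb0ca26296dcf3793a2f284446fb1395e"   # checksum from the log
path = "/var/lib/openstack/config/nova/config.json"
if sha1_of(path) == expected:
    print("checksums match: copy would report ok/unchanged")
else:
    print("content differs: copy would rewrite", path)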
Nov 26 01:35:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:56 compute-0 python3.9[343535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:35:56 compute-0 python3.9[343611]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:35:58 compute-0 python3.9[343761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:35:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:35:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:35:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6949 writes, 28K keys, 6949 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6949 writes, 1271 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a56a188dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 26 01:35:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:35:59 compute-0 podman[343832]: 2025-11-26 01:35:59.517296264 +0000 UTC m=+0.076710708 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 26 01:35:59 compute-0 podman[158021]: time="2025-11-26T01:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:35:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Nov 26 01:35:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7683 "" "Go-http-client/1.1"
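The two GET lines above are the libpod REST API being scraped over its unix socket; the podman_exporter container configured earlier points CONTAINER_HOST at unix:///run/podman/podman.sock. A sketch of the same containers/json call from the Python standard library; the socket path comes from that config, and the connection subclass is the usual AF_UNIX adaptation of http.client:

import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Id"][:12], c.get("Names"), c.get("State"))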
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.782 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff5a700b0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:35:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:35:59.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
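
The polling cycle above has a fixed shape: a single worker thread, a local_instances discovery that returns nothing (no VMs run on this node yet), so each pollster is registered, skipped, and then marked finished. An illustrative reconstruction of that control flow, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Discovery is empty on this node, hence every
        # "Skip pollster ..., no resources found this cycle" line above.
        return []

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
        else:
            print(f"Polling {name} for {len(resources)} resources")
        print(f"Finished processing pollster [{name}]")

    pollsters = ["disk.device.usage", "power.state", "cpu", "memory.usage"]
    # With more pollsters than workers (max_workers=1, as in the log),
    # tasks queue up and the cycle takes longer than a single interval.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for name in pollsters:
            pool.submit(run_pollster, name)
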
Nov 26 01:35:59 compute-0 python3.9[343899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120957.2735393-1249-106132524309851/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
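
The recurring ceph-mgr pgmap lines carry the cluster state in a fixed grammar: pg count, pg states, then data/used/avail figures. A small sketch of pulling those fields out of such a line for trending:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    print(m.groupdict())  # parsed fields from the log line above
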
Nov 26 01:36:00 compute-0 python3.9[344050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:01 compute-0 openstack_network_exporter[160178]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:36:01 compute-0 openstack_network_exporter[160178]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:36:01 compute-0 openstack_network_exporter[160178]: ERROR   01:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:36:01 compute-0 openstack_network_exporter[160178]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:36:01 compute-0 openstack_network_exporter[160178]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
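
The exporter errors above are failed ovs-appctl-style lookups: the agent locates a daemon through its control socket (a *.ctl file in the daemon's rundir), and none is visible here, typically because the daemon does not run on this host (ovn-northd normally lives on the control plane) or its rundir is not mounted into the exporter's container. A quick check one can run by hand, assuming the usual default rundirs:

    import glob

    # Default rundirs for OVN/OVS daemons; adjust if built with other paths.
    candidates = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }
    for daemon, pattern in candidates.items():
        matches = glob.glob(pattern)
        if matches:
            print(f"{daemon}: control socket {matches[0]}")
        else:
            print(f"{daemon}: no control socket files found")
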
Nov 26 01:36:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:02 compute-0 python3.9[344171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120960.1188726-1249-127646226918975/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:03 compute-0 python3.9[344321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:04 compute-0 python3.9[344442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120962.672558-1249-136693680841397/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:05 compute-0 python3.9[344592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:36:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5768 writes, 24K keys, 5768 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5768 writes, 916 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55731391f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
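
The rocksdb stats dump above is a single journal record: rsyslog-style escaping replaces embedded control characters with #NNN octal escapes, so #012 marks the newlines of the original multi-line report (the record is also cut off mid-word at the end, which is left as-is). Restoring the layout for reading is a one-liner; the log file name below is a placeholder:

    def unescape_octal_newlines(record: str) -> str:
        # "#012" is the octal escape for "\n" used when control characters
        # are escaped on receive; replacing it restores the original layout.
        return record.replace("#012", "\n")

    with open("compute-0-messages.log") as f:  # placeholder file name
        for line in f:
            if "DUMPING STATS" in line or "** DB Stats **" in line:
                print(unescape_octal_newlines(line))
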
Nov 26 01:36:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 01:36:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:06 compute-0 python3.9[344713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120964.6929374-1249-260795957365003/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:07 compute-0 python3.9[344865]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:36:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:08 compute-0 python3.9[345017]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
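
Each ansible-* line above records one module invocation together with its full argument list, which makes the journal a usable audit trail for the deployment run. A rough extractor for these records; the naive key=value split ignores values containing spaces, which is acceptable for a quick audit:

    import re

    ANSIBLE_RE = re.compile(
        r"python3\.\d+\[\d+\]: ansible-(?P<module>\S+) Invoked with (?P<args>.*)")

    def parse_invocation(line: str):
        m = ANSIBLE_RE.search(line)
        if not m:
            return None
        args = dict(kv.split("=", 1)
                    for kv in m.group("args").split() if "=" in kv)
        return m.group("module"), args

    line = ("Nov 26 01:36:08 compute-0 python3.9[345017]: "
            "ansible-ansible.legacy.copy Invoked with "
            "dest=/home/nova/.ssh/authorized_keys mode=0600 owner=nova")
    print(parse_invocation(line))
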
Nov 26 01:36:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:09 compute-0 python3.9[345169]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:10 compute-0 podman[345286]: 2025-11-26 01:36:10.574795206 +0000 UTC m=+0.124355968 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:36:10 compute-0 podman[345290]: 2025-11-26 01:36:10.601814525 +0000 UTC m=+0.148021563 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:36:10 compute-0 podman[345292]: 2025-11-26 01:36:10.628712352 +0000 UTC m=+0.168074708 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:36:10 compute-0 python3.9[345382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:11 compute-0 podman[345482]: 2025-11-26 01:36:11.571110853 +0000 UTC m=+0.120622373 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 26 01:36:11 compute-0 podman[345483]: 2025-11-26 01:36:11.60157759 +0000 UTC m=+0.142856469 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:36:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:12 compute-0 python3.9[345554]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764120969.9539247-1356-158530984525157/.source _original_basename=.3c6jyys3 follow=False checksum=c3836fcfe3164138d4d85a8502021a23c997d667 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 26 01:36:13 compute-0 python3.9[345706]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:15 compute-0 python3.9[345858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:16 compute-0 python3.9[345979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120974.5995173-1382-77877675405975/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.033942) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977034055, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 251, "total_data_size": 2588114, "memory_usage": 2618208, "flush_reason": "Manual Compaction"}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977053529, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2553409, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14732, "largest_seqno": 16294, "table_properties": {"data_size": 2546101, "index_size": 4379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14464, "raw_average_key_size": 19, "raw_value_size": 2531610, "raw_average_value_size": 3430, "num_data_blocks": 200, "num_entries": 738, "num_filter_entries": 738, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764120800, "oldest_key_time": 1764120800, "file_creation_time": 1764120977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19678 microseconds, and 10990 cpu microseconds.
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.053628) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2553409 bytes OK
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.053656) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.057187) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.057209) EVENT_LOG_v1 {"time_micros": 1764120977057202, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.057231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2581362, prev total WAL file size 2581362, number of live WAL files 2.
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.059402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2493KB)], [35(6752KB)]
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977059491, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9467849, "oldest_snapshot_seqno": -1}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3966 keys, 7712477 bytes, temperature: kUnknown
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977118102, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7712477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7683689, "index_size": 17798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96894, "raw_average_key_size": 24, "raw_value_size": 7609524, "raw_average_value_size": 1918, "num_data_blocks": 755, "num_entries": 3966, "num_filter_entries": 3966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764120977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.118376) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7712477 bytes
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.121669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4480, records dropped: 514 output_compression: NoCompression
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.121699) EVENT_LOG_v1 {"time_micros": 1764120977121685, "job": 16, "event": "compaction_finished", "compaction_time_micros": 58686, "compaction_time_cpu_micros": 33596, "output_level": 6, "num_output_files": 1, "total_output_size": 7712477, "num_input_records": 4480, "num_output_records": 3966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977123122, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764120977126057, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.058930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.126256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.126739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.126744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.126748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:36:17.126751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:36:17 compute-0 python3.9[346129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:36:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:18 compute-0 podman[346224]: 2025-11-26 01:36:18.219312235 +0000 UTC m=+0.111375934 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 01:36:18 compute-0 python3.9[346270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764120976.7634168-1397-165099589003280/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:36:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:19 compute-0 python3.9[346424]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 26 01:36:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:20 compute-0 podman[346548]: 2025-11-26 01:36:20.702374987 +0000 UTC m=+0.112794653 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9, config_id=edpm, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:36:20 compute-0 podman[346549]: 2025-11-26 01:36:20.716337959 +0000 UTC m=+0.114106421 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 26 01:36:20 compute-0 python3.9[346612]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:36:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:22 compute-0 python3[346765]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:36:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:36:24.945 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:36:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:36:24.946 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:36:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:36:24.946 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:36:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:29 compute-0 podman[158021]: time="2025-11-26T01:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:36:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Nov 26 01:36:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7691 "" "Go-http-client/1.1"
Nov 26 01:36:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:31 compute-0 openstack_network_exporter[160178]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:36:31 compute-0 openstack_network_exporter[160178]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:36:31 compute-0 openstack_network_exporter[160178]: ERROR   01:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:36:31 compute-0 openstack_network_exporter[160178]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:36:31 compute-0 openstack_network_exporter[160178]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:36:31 compute-0 podman[346817]: 2025-11-26 01:36:31.485686749 +0000 UTC m=+1.043334865 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 26 01:36:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:39 compute-0 podman[346777]: 2025-11-26 01:36:39.164076113 +0000 UTC m=+16.878871426 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 01:36:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:39 compute-0 podman[346873]: 2025-11-26 01:36:39.419798743 +0000 UTC m=+0.082072802 container create d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:36:39 compute-0 podman[346873]: 2025-11-26 01:36:39.382714703 +0000 UTC m=+0.044988802 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 01:36:39 compute-0 python3[346765]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 26 01:36:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:40 compute-0 python3.9[347161]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:36:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:36:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:40 compute-0 podman[347178]: 2025-11-26 01:36:40.811300169 +0000 UTC m=+0.143299519 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:36:40 compute-0 podman[347177]: 2025-11-26 01:36:40.822851503 +0000 UTC m=+0.153470664 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm)
Nov 26 01:36:40 compute-0 podman[347180]: 2025-11-26 01:36:40.850159878 +0000 UTC m=+0.174243406 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:36:41
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.log', 'images', 'vms']
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:36:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c933dbc8-7b52-49e5-89ff-2f9e2a4b974a does not exist
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 14bbaae4-bf8a-44ed-bc26-98d13e12dad5 does not exist
Nov 26 01:36:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 916c9545-d9ef-4601-aa7d-2e8fadfedd0a does not exist
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:36:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:36:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:42 compute-0 podman[347475]: 2025-11-26 01:36:42.242323043 +0000 UTC m=+0.114582733 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Nov 26 01:36:42 compute-0 podman[347479]: 2025-11-26 01:36:42.248193168 +0000 UTC m=+0.116641491 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:36:42 compute-0 python3.9[347672]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 26 01:36:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:36:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.01220595 +0000 UTC m=+0.077821403 container create 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:42.975627845 +0000 UTC m=+0.041243298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:43 compute-0 systemd[1]: Started libpod-conmon-226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d.scope.
Nov 26 01:36:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.16447471 +0000 UTC m=+0.230090163 container init 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.182097234 +0000 UTC m=+0.247712687 container start 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.188257877 +0000 UTC m=+0.253873350 container attach 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:36:43 compute-0 pensive_pascal[347755]: 167 167
Nov 26 01:36:43 compute-0 systemd[1]: libpod-226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d.scope: Deactivated successfully.
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.195488639 +0000 UTC m=+0.261104092 container died 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:36:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-592311a258c4fc271326b08a692e692d7ed285d6dad86884aa05fc82ddb2308f-merged.mount: Deactivated successfully.
Nov 26 01:36:43 compute-0 podman[347734]: 2025-11-26 01:36:43.274099034 +0000 UTC m=+0.339714457 container remove 226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pascal, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:36:43 compute-0 systemd[1]: libpod-conmon-226e922416f7eb40760ef733b0eee9fad42859f20e48de6f8573cda6594e2a0d.scope: Deactivated successfully.
Nov 26 01:36:43 compute-0 podman[347848]: 2025-11-26 01:36:43.490727938 +0000 UTC m=+0.056782333 container create 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:36:43 compute-0 systemd[1]: Started libpod-conmon-0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6.scope.
Nov 26 01:36:43 compute-0 podman[347848]: 2025-11-26 01:36:43.469392789 +0000 UTC m=+0.035447274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:43 compute-0 podman[347848]: 2025-11-26 01:36:43.645928288 +0000 UTC m=+0.211982773 container init 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:36:43 compute-0 podman[347848]: 2025-11-26 01:36:43.664493899 +0000 UTC m=+0.230548314 container start 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:36:43 compute-0 podman[347848]: 2025-11-26 01:36:43.6713187 +0000 UTC m=+0.237373105 container attach 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:36:43 compute-0 python3.9[347921]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:36:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:44 compute-0 inspiring_moser[347888]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:36:44 compute-0 inspiring_moser[347888]: --> relative data size: 1.0
Nov 26 01:36:44 compute-0 inspiring_moser[347888]: --> All data devices are unavailable
Nov 26 01:36:45 compute-0 systemd[1]: libpod-0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6.scope: Deactivated successfully.
Nov 26 01:36:45 compute-0 systemd[1]: libpod-0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6.scope: Consumed 1.252s CPU time.
Nov 26 01:36:45 compute-0 podman[347848]: 2025-11-26 01:36:45.016614901 +0000 UTC m=+1.582669316 container died 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-655cc419d49504fe51384bbc248dec49398f49584eb508bc8aac75e6bc701f59-merged.mount: Deactivated successfully.
Nov 26 01:36:45 compute-0 podman[347848]: 2025-11-26 01:36:45.089395032 +0000 UTC m=+1.655449427 container remove 0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:36:45 compute-0 systemd[1]: libpod-conmon-0ae8ebc318eb1cb015d8252f1639abfabb78d50590a16f8310a898225312a1d6.scope: Deactivated successfully.
Nov 26 01:36:45 compute-0 python3[348093]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:36:45 compute-0 podman[348193]: 2025-11-26 01:36:45.486446725 +0000 UTC m=+0.084307015 container create ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 01:36:45 compute-0 podman[348193]: 2025-11-26 01:36:45.441930117 +0000 UTC m=+0.039790427 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 01:36:45 compute-0 python3[348093]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.045007327 +0000 UTC m=+0.071596899 container create 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:36:46 compute-0 systemd[1]: Started libpod-conmon-27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732.scope.
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.024605204 +0000 UTC m=+0.051194776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.166201375 +0000 UTC m=+0.192791057 container init 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:36:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.184635282 +0000 UTC m=+0.211224864 container start 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:36:46 compute-0 funny_maxwell[348404]: 167 167
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.189801156 +0000 UTC m=+0.216390758 container attach 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:36:46 compute-0 systemd[1]: libpod-27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732.scope: Deactivated successfully.
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.191991298 +0000 UTC m=+0.218580900 container died 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a36c3840834c970d773208bd3159e1f4349ac56f929a7272f85e18f1de7ea1b-merged.mount: Deactivated successfully.
Nov 26 01:36:46 compute-0 podman[348359]: 2025-11-26 01:36:46.274142611 +0000 UTC m=+0.300732193 container remove 27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_maxwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:36:46 compute-0 systemd[1]: libpod-conmon-27bd90b7eecbe8dab033b4e642f41f4e51230223b2e3f9a040a9495596006732.scope: Deactivated successfully.
Nov 26 01:36:46 compute-0 podman[348490]: 2025-11-26 01:36:46.506580269 +0000 UTC m=+0.071166147 container create b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:36:46 compute-0 podman[348490]: 2025-11-26 01:36:46.480275671 +0000 UTC m=+0.044861559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:46 compute-0 systemd[1]: Started libpod-conmon-b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08.scope.
Nov 26 01:36:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9acfe6799e474867ddc0ffe0efb4755ffc0bbdc211c5b7277b5dc8371ce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9acfe6799e474867ddc0ffe0efb4755ffc0bbdc211c5b7277b5dc8371ce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9acfe6799e474867ddc0ffe0efb4755ffc0bbdc211c5b7277b5dc8371ce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9acfe6799e474867ddc0ffe0efb4755ffc0bbdc211c5b7277b5dc8371ce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:46 compute-0 podman[348490]: 2025-11-26 01:36:46.693365176 +0000 UTC m=+0.257951054 container init b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:36:46 compute-0 python3.9[348515]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:46 compute-0 podman[348490]: 2025-11-26 01:36:46.712446761 +0000 UTC m=+0.277032629 container start b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:36:46 compute-0 podman[348490]: 2025-11-26 01:36:46.71669449 +0000 UTC m=+0.281280358 container attach b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:36:47 compute-0 thirsty_golick[348519]: {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    "0": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "devices": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "/dev/loop3"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            ],
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_name": "ceph_lv0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_size": "21470642176",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "name": "ceph_lv0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "tags": {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_name": "ceph",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.crush_device_class": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.encrypted": "0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_id": "0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.vdo": "0"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            },
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "vg_name": "ceph_vg0"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        }
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    ],
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    "1": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "devices": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "/dev/loop4"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            ],
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_name": "ceph_lv1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_size": "21470642176",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "name": "ceph_lv1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "tags": {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_name": "ceph",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.crush_device_class": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.encrypted": "0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_id": "1",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.vdo": "0"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            },
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "vg_name": "ceph_vg1"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        }
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    ],
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    "2": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "devices": [
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "/dev/loop5"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            ],
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_name": "ceph_lv2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_size": "21470642176",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "name": "ceph_lv2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "tags": {
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.cluster_name": "ceph",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.crush_device_class": "",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.encrypted": "0",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osd_id": "2",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:                "ceph.vdo": "0"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            },
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "type": "block",
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:            "vg_name": "ceph_vg2"
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:        }
Nov 26 01:36:47 compute-0 thirsty_golick[348519]:    ]
Nov 26 01:36:47 compute-0 thirsty_golick[348519]: }
Nov 26 01:36:47 compute-0 systemd[1]: libpod-b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08.scope: Deactivated successfully.
Nov 26 01:36:47 compute-0 podman[348490]: 2025-11-26 01:36:47.541206808 +0000 UTC m=+1.105792686 container died b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbc9acfe6799e474867ddc0ffe0efb4755ffc0bbdc211c5b7277b5dc8371ce4-merged.mount: Deactivated successfully.
Nov 26 01:36:47 compute-0 podman[348490]: 2025-11-26 01:36:47.614467712 +0000 UTC m=+1.179053580 container remove b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:36:47 compute-0 systemd[1]: libpod-conmon-b467624750da9b3c47de65993d6ca5cb08d431d674095f9708063567dafdea08.scope: Deactivated successfully.
Nov 26 01:36:47 compute-0 python3.9[348691]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:36:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:48 compute-0 podman[348913]: 2025-11-26 01:36:48.562784862 +0000 UTC m=+0.111532258 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.673950189 +0000 UTC m=+0.062440532 container create 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:36:48 compute-0 systemd[1]: Started libpod-conmon-330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c.scope.
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.649047111 +0000 UTC m=+0.037537474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.808663116 +0000 UTC m=+0.197153489 container init 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.824703646 +0000 UTC m=+0.213193979 container start 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.829439669 +0000 UTC m=+0.217930042 container attach 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:36:48 compute-0 cranky_haslett[349018]: 167 167
Nov 26 01:36:48 compute-0 systemd[1]: libpod-330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c.scope: Deactivated successfully.
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.838596195 +0000 UTC m=+0.227086568 container died 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b22b0937084244e8daa19525e24360c125498a85023ae48f83868c6c29d4d3d-merged.mount: Deactivated successfully.
Nov 26 01:36:48 compute-0 podman[348982]: 2025-11-26 01:36:48.907255301 +0000 UTC m=+0.295745674 container remove 330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 01:36:48 compute-0 systemd[1]: libpod-conmon-330e119314847c810079acf0c836833220275e54e42a1fd6d7c458f4451f595c.scope: Deactivated successfully.
Nov 26 01:36:48 compute-0 python3.9[349012]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764121007.9541323-1489-274280561289466/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:36:49 compute-0 podman[349058]: 2025-11-26 01:36:49.146123958 +0000 UTC m=+0.077090552 container create 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:36:49 compute-0 podman[349058]: 2025-11-26 01:36:49.109625805 +0000 UTC m=+0.040592459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:36:49 compute-0 systemd[1]: Started libpod-conmon-3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411.scope.
Nov 26 01:36:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54796095a63f1f464f74e743b4fd23b25b388be509d545c77698ec3ce50518d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54796095a63f1f464f74e743b4fd23b25b388be509d545c77698ec3ce50518d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54796095a63f1f464f74e743b4fd23b25b388be509d545c77698ec3ce50518d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54796095a63f1f464f74e743b4fd23b25b388be509d545c77698ec3ce50518d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:49 compute-0 podman[349058]: 2025-11-26 01:36:49.31843292 +0000 UTC m=+0.249399564 container init 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:36:49 compute-0 podman[349058]: 2025-11-26 01:36:49.32842048 +0000 UTC m=+0.259387074 container start 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:36:49 compute-0 podman[349058]: 2025-11-26 01:36:49.334417218 +0000 UTC m=+0.265383782 container attach 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:36:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:49 compute-0 python3.9[349137]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:36:49 compute-0 systemd[1]: Reloading.
Nov 26 01:36:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:36:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
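The ansible-systemd task above ran with daemon_reload=True, so systemd re-executed its generators; both generator messages are routine. rc-local-generator only instantiates rc-local.service when /etc/rc.d/rc.local carries the execute bit, and sysv-generator is warning that the legacy network initscript has no native unit. The rc.local check is trivial to reproduce:

    import os

    # systemd-rc-local-generator skips rc.local unless it is executable;
    # the log line above implies this prints False on compute-0.
    print(os.access("/etc/rc.d/rc.local", os.X_OK))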
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:50 compute-0 competent_ellis[349103]: {
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_id": 0,
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "type": "bluestore"
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    },
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_id": 2,
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "type": "bluestore"
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    },
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_id": 1,
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:36:50 compute-0 competent_ellis[349103]:        "type": "bluestore"
Nov 26 01:36:50 compute-0 competent_ellis[349103]:    }
Nov 26 01:36:50 compute-0 competent_ellis[349103]: }
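The JSON emitted by the short-lived competent_ellis container matches the shape of ceph-volume lvm list --format json output keyed by OSD uuid, which cephadm runs in a one-off container to refresh its per-host inventory: three bluestore OSDs (0, 1, 2) on LVM logical volumes, all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A minimal sketch of turning that blob into an osd_id-to-device map, with one entry inlined from the log:

    import json

    def osd_devices(blob: str) -> dict:
        """Map osd_id -> backing device from ceph-volume style JSON."""
        return {v["osd_id"]: v["device"] for v in json.loads(blob).values()}

    sample = json.dumps({
        "835781ef-644a-4834-abb3-029e5bcba0ff": {
            "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
            "type": "bluestore",
        }
    })
    print(osd_devices(sample))  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}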
Nov 26 01:36:50 compute-0 systemd[1]: libpod-3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411.scope: Deactivated successfully.
Nov 26 01:36:50 compute-0 podman[349058]: 2025-11-26 01:36:50.446135459 +0000 UTC m=+1.377102053 container died 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 26 01:36:50 compute-0 systemd[1]: libpod-3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411.scope: Consumed 1.091s CPU time.
Nov 26 01:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f54796095a63f1f464f74e743b4fd23b25b388be509d545c77698ec3ce50518d-merged.mount: Deactivated successfully.
Nov 26 01:36:50 compute-0 podman[349058]: 2025-11-26 01:36:50.550526226 +0000 UTC m=+1.481492800 container remove 3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:36:50 compute-0 systemd[1]: libpod-conmon-3857f4bad3c9a60538d1f3415b4fa991a56eef576032aada0c6fad81d15ca411.scope: Deactivated successfully.
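Taken together, the podman lines trace the full one-shot container lifecycle: init, start, attach, died, remove, with systemd deactivating the libpod scope and the overlay mount in between. The same sequence can be watched live from podman's event stream; a sketch, assuming podman's JSON event format with capitalized field names:

    import json
    import subprocess

    # Follow container lifecycle events as JSON lines (Ctrl-C to stop).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # e.g. "died competent_ellis"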
Nov 26 01:36:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:36:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:36:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 15dd7ec2-9afd-49cf-a490-081d172a663e does not exist
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fa2b6aac-5223-40f1-b970-6962d396123a does not exist
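The two mon_command lines show the cephadm mgr module persisting its refreshed inventory into the monitor config-key store, one key for the device list and one for the host record; the progress-module "does not exist" warnings are harmless completions for progress events that were already purged. Reading a stored value back is one command; a hedged sketch (the key name is copied from the log, and the value is assumed to be JSON, as cephadm stores it):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))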
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:36:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
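Each pg_autoscaler pass logs one effective_target_ratio/pool pair. The logged pg targets are consistent with usage_ratio x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the three OSDs in this cluster; the raw target is then quantized to a power of two, and floors plus shrink damping explain why a target of roughly 0.002 still reads "quantized to 1" while the idle pools stay at 32. A sketch reproducing the non-zero targets above, up to float formatting:

    # usage ratio and bias copied from the log lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    TARGET_PGS = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * TARGET_PGS)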
Nov 26 01:36:50 compute-0 podman[349317]: 2025-11-26 01:36:50.898145962 +0000 UTC m=+0.107825264 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:36:50 compute-0 podman[349316]: 2025-11-26 01:36:50.91946713 +0000 UTC m=+0.127733142 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0)
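Both health_status events report healthy with health_failing_streak=0, i.e. no consecutive failures: podman periodically runs each container's configured test command (/openstack/healthcheck ipmi, /openstack/healthcheck kepler) inside the container. The current state can also be read on demand; a sketch that tolerates both key spellings podman has used for the health block:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "kepler"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))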
Nov 26 01:36:50 compute-0 python3.9[349301]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:36:51 compute-0 systemd[1]: Reloading.
Nov 26 01:36:51 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:36:51 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:36:51 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 01:36:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:36:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 01:36:51 compute-0 podman[349415]: 2025-11-26 01:36:51.762021265 +0000 UTC m=+0.197274353 container init ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute)
Nov 26 01:36:51 compute-0 podman[349415]: 2025-11-26 01:36:51.770435801 +0000 UTC m=+0.205688839 container start ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:36:51 compute-0 podman[349415]: nova_compute
Nov 26 01:36:51 compute-0 nova_compute[349430]: + sudo -E kolla_set_configs
Nov 26 01:36:51 compute-0 systemd[1]: Started nova_compute container.
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Validating config file
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying service configuration files
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Deleting /etc/ceph
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Creating directory /etc/ceph
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Writing out command to execute
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:36:51 compute-0 nova_compute[349430]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 01:36:51 compute-0 nova_compute[349430]: ++ cat /run_command
Nov 26 01:36:51 compute-0 nova_compute[349430]: + CMD=nova-compute
Nov 26 01:36:51 compute-0 nova_compute[349430]: + ARGS=
Nov 26 01:36:51 compute-0 nova_compute[349430]: + sudo kolla_copy_cacerts
Nov 26 01:36:51 compute-0 nova_compute[349430]: + [[ ! -n '' ]]
Nov 26 01:36:51 compute-0 nova_compute[349430]: + . kolla_extend_start
Nov 26 01:36:51 compute-0 nova_compute[349430]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 01:36:51 compute-0 nova_compute[349430]: Running command: 'nova-compute'
Nov 26 01:36:51 compute-0 nova_compute[349430]: + umask 0022
Nov 26 01:36:51 compute-0 nova_compute[349430]: + exec nova-compute
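The block from "sudo -E kolla_set_configs" down to "exec nova-compute" is the standard kolla entrypoint: it reads /var/lib/kolla/config_files/config.json, copies every listed source to its destination and fixes ownership and permissions (KOLLA_CONFIG_STRATEGY=COPY_ALWAYS repeats the copy on every start), then reads the service command from /run_command and execs it. Note the run-on-host shim copied over /usr/sbin/iscsiadm, which forwards iscsiadm invocations to the host. A hedged sketch of the config.json shape these lines imply (field names follow the usual kolla layout; paths are from the log):

    # Approximate /var/lib/kolla/config_files/config.json for this container
    # (sketch; kolla's real schema allows more optional fields).
    config_json = {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",
                "perm": "0600",
            },
            # ... one entry per Copying/Setting-permission pair above
        ],
    }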
Nov 26 01:36:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:53 compute-0 python3.9[349592]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.080 349434 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.081 349434 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.081 349434 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.081 349434 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
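os_vif discovers its plugins through setuptools entry points via stevedore, logging one DEBUG line per loaded extension and then the INFO summary. The discovery step can be reproduced with stevedore directly; a sketch, assuming the os_vif entry-point namespace used upstream:

    from stevedore import extension

    # Enumerate installed os-vif plugins without invoking them.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))  # expect ['linux_bridge', 'noop', 'ovs'] here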
Nov 26 01:36:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.230 349434 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.265 349434 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.266 349434 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
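The grep exiting 1 (and oslo's "failed. Not Retrying") is a capability probe, not an error: os-brick greps the iscsiadm binary for the node.session.scan option string to decide whether open-iscsi supports manual scans; no match means the feature is absent, so automatic scanning is assumed. The probe is easy to mimic:

    import subprocess

    # Exit status 0 = iscsiadm knows node.session.scan (manual scan support).
    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    print("manual scan supported:", res.returncode == 0)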
Nov 26 01:36:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:54 compute-0 nova_compute[349430]: 2025-11-26 01:36:54.952 349434 INFO nova.virt.driver [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.084 349434 INFO nova.compute.provider_config [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.102 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.103 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.103 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
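The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's lock tracing while oslo.service sets up its process singleton. The same pattern in application code is a one-line context manager; a minimal sketch:

    from oslo_concurrency import lockutils

    # Emits the same Acquiring/Acquired/Releasing DEBUG lines when
    # oslo logging is configured at debug level.
    with lockutils.lock("singleton_lock"):
        pass  # critical section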
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.104 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.104 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.104 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.104 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.105 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.105 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.105 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.106 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.106 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.106 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.107 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.107 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.107 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.108 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.108 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.108 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.109 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.109 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.109 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.110 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.110 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.110 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.111 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.111 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.111 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.112 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.112 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.112 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.113 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.113 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.113 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.114 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.114 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.114 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.115 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.115 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.115 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.115 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.116 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.116 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.116 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.117 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.117 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.117 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.118 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.118 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.119 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.119 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.120 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.120 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.121 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.121 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.121 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.122 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.122 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.122 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.123 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.123 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.124 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.124 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.124 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.125 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.125 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.125 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.125 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.126 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.126 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.126 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.127 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.127 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.127 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.128 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.128 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.128 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.129 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.129 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.129 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.130 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.130 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.130 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.131 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.131 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.132 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.132 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.132 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.133 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.133 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.133 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.134 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.134 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.134 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.135 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.135 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.135 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.136 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.136 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.136 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.137 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.137 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.137 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.138 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.138 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.138 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.139 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.139 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.139 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.140 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.140 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.140 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.141 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.141 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.141 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.142 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.142 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.142 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.143 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.143 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.143 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.144 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.144 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.144 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.145 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.145 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.146 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.146 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.147 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.147 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.147 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.148 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.148 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.148 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.149 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.149 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.149 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.150 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.150 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.150 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.151 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.151 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.151 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.152 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.152 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.152 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.153 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.153 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.153 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.154 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.154 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.155 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.155 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.155 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.156 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.156 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.156 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.157 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.157 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.157 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.158 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.158 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.158 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.159 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.159 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.159 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.159 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.160 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.160 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.160 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.160 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.161 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.161 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.161 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.161 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.161 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.162 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.162 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.162 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.162 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.163 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.163 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.163 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.163 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.164 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.164 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.164 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.164 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.164 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.165 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.165 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.165 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.165 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.166 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.166 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.166 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.166 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.166 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.167 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.167 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.167 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.167 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.168 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.168 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.168 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.168 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.169 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.169 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.169 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.169 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.170 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.170 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.170 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.170 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.171 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.171 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.171 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.171 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.171 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.172 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.172 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.172 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.172 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.173 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.173 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.173 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.173 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.173 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.174 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.174 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.174 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.174 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.175 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.175 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.175 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.175 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.176 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.176 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.176 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.176 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.176 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.177 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.177 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.177 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.177 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.178 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.178 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.178 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.178 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.179 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.179 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.179 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.179 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.179 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.180 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.180 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.180 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.180 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.180 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.181 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.181 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.181 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.181 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.182 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.182 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.182 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.182 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.182 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.183 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.183 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.183 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.184 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.184 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.184 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.184 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.184 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.185 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.185 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.185 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.185 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.186 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.186 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.186 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.186 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.187 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.187 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.187 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.187 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.187 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.188 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.188 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.188 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.188 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.189 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.189 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.189 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.189 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.190 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.190 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.190 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.190 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.191 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.191 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.191 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
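Note: the `****` values above (database.slave_connection, api_database.connection, api_database.slave_connection) are not literal. oslo.config masks any option registered with secret=True when it dumps values through log_opt_values() — the cfg.py:2609 frame shown on every line here. A minimal sketch of that mechanism, using an illustrative option group rather than nova's real registrations:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    # Illustrative registrations, not nova's own: options created with
    # secret=True are printed as '****' by log_opt_values(); ordinary
    # options are printed verbatim, exactly as in the dump above.
    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('connection', secret=True),
         cfg.IntOpt('max_retries', default=10)],
        group='api_database')
    CONF(args=[], project='demo')
    CONF.log_opt_values(LOG, logging.DEBUG)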
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.191 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.191 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.192 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.192 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.192 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.192 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.193 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.193 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.193 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.194 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.194 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.194 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.194 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.194 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.195 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.195 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.195 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.196 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.197 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
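Note: the [glance] options above describe how this compute node reaches the image service: endpoint discovery via the service catalog (api_servers and endpoint_override are unset), the internal interface only, region regionOne, and num_retries = 3. A roughly equivalent client construction with keystoneauth1 is sketched below; the auth URL, username, and password are placeholders, not values from this log:

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    # Placeholder credentials for illustration only.
    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # Mirrors glance.service_type / valid_interfaces / region_name above.
    image_api = adapter.Adapter(session=sess, service_type='image',
                                interface='internal',
                                region_name='regionOne')
    print(image_api.get('/v2/images').status_code)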
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.198 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.199 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.200 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.201 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.202 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.203 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.204 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.205 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.206 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.207 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.208 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.209 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
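Note: key_manager.backend = barbican (above) selects the Barbican driver through castellan, and the [barbican] group then supplies its endpoint and retry behaviour; auth_endpoint is still the http://localhost/identity/v3 default here. A minimal castellan sketch, assuming the nova-style configuration above is already loaded and a valid request context is available from the caller:

    from oslo_config import cfg
    from castellan import key_manager

    CONF = cfg.CONF  # assumes the configuration dumped above is loaded
    km = key_manager.API(configuration=CONF)  # Barbican-backed manager
    # secret = km.get(context, secret_id)  # context and secret_id come
    #                                      # from the calling service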
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.210 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.211 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.212 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.213 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.214 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.215 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.216 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.217 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.218 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.219 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 WARNING oslo_config.cfg [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 01:36:55 compute-0 nova_compute[349430]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 01:36:55 compute-0 nova_compute[349430]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 01:36:55 compute-0 nova_compute[349430]: and ``live_migration_inbound_addr`` respectively.
Nov 26 01:36:55 compute-0 nova_compute[349430]: ).  Its value may be silently ignored in the future.
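The WARNING above is emitted by oslo.config itself, not by nova code: the option is still registered but flagged deprecated-for-removal, so any explicitly configured value triggers this message at parse time. A minimal sketch of how such an option is declared (illustrative only, not nova's actual source):

    from oslo_config import cfg

    # Options flagged this way keep working, but oslo.config logs the
    # "Deprecated: Option ... is deprecated for removal" warning seen above
    # whenever a config file still sets them.
    live_migration_uri = cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason='superseded by live_migration_scheme and '
                          'live_migration_inbound_addr')
    cfg.CONF.register_opts([live_migration_uri], group='libvirt')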
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.220 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.221 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.222 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rbd_secret_uuid        = 36901f64-240e-5c29-a2e2-29b56f2c329c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.223 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.224 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.225 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.226 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.227 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
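Read together, live_migration_uri = qemu+tls://%s/system and live_migration_with_native_tls = True mean this host migrates guests over a TLS-secured libvirt connection, with the destination hostname substituted into the %s placeholder. A rough illustration of that substitution (dest_host is a made-up value; the log names no migration target):

    # Hypothetical destination host, for illustration only.
    live_migration_uri = 'qemu+tls://%s/system'   # value logged above
    dest_host = 'compute-1'
    print(live_migration_uri % dest_host)         # qemu+tls://compute-1/system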
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.228 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.229 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.230 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.231 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
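neutron.metadata_proxy_shared_secret prints as **** (as does placement.password below) because oslo.config masks any option registered with secret=True when dumping values; the real value is never written to the journal. A minimal sketch of that behaviour, using a toy option rather than nova's real definition:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('metadata_proxy_shared_secret', secret=True)],
                       group='neutron')
    CONF([])  # parse with no CLI args; defaults only
    CONF.set_override('metadata_proxy_shared_secret', 's3cret', group='neutron')
    CONF.log_opt_values(LOG, logging.DEBUG)  # the secret option is logged as ****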
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.232 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.233 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.234 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.235 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.236 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.237 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
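The [placement] block above is a standard keystoneauth1 auth/adapter section: password auth against https://keystone-internal.openstack.svc:5000 as user nova in project service, with endpoints resolved from the internal interface of regionOne. Roughly how such a section becomes an authenticated session (sketch only; the password is masked in the log, so the value below is a placeholder):

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova', password='REDACTED',   # logged above as ****
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    # Service clients then discover the placement endpoint from the catalog,
    # restricted to interface 'internal' and region 'regionOne' per the log.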
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.238 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.239 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
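These are the default per-project limits enforced by nova.quota.DbQuotaDriver (10 instances, 20 cores, 51200 MiB of RAM, and so on). A toy illustration of how such limits gate a request, not nova's actual driver code, with assumed usage numbers:

    # Values from the log above; 'used' and 'requested' are invented examples.
    quotas = {'instances': 10, 'cores': 20, 'ram': 51200}
    used = {'instances': 6, 'cores': 18, 'ram': 24576}
    requested = {'instances': 1, 'cores': 4, 'ram': 4096}
    over = [r for r in quotas if used[r] + requested[r] > quotas[r]]
    print(over)  # ['cores']: 18 + 4 exceeds the 20-core limit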
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.240 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.241 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.242 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.243 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.244 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
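The [scheduler] and [filter_scheduler] sections configure nova's filter-then-weigh host selection: every enabled filter must accept a host, survivors are scored by weighers (the cpu/disk/ram multipliers are all 1.0 here), and host_subset_size = 1 means the single best-scoring host wins. A schematic sketch of that pattern with stand-in callables; nova actually loads the classes named by available_filters and weight_classes:

    # Stand-in filter and weigher, illustrative only.
    def compute_filter(host):      # e.g. ComputeFilter: is the service up?
        return host['up']

    def ram_weigher(host):         # ram_weight_multiplier = 1.0 above
        return 1.0 * host['free_ram_mb']

    hosts = [{'name': 'compute-0', 'up': True,  'free_ram_mb': 2048},
             {'name': 'compute-1', 'up': False, 'free_ram_mb': 8192}]
    passing = [h for h in hosts if all(f(h) for f in (compute_filter,))]
    best = sorted(passing, key=ram_weigher, reverse=True)[:1]  # host_subset_size = 1
    print(best[0]['name'])  # compute-0; compute-1 was filtered out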
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.245 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.246 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
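serial_console is disabled on this host, but for reference the port_range value 10000:20000 is the low:high span of TCP ports nova would hand out for guest serial consoles. Parsing the logged value is trivial (sketch):

    low, high = map(int, '10000:20000'.split(':'))  # serial_console.port_range
    assert low < high
    print(f'{high - low} candidate ports starting at {low}')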
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.247 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.248 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.249 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.250 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.251 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.252 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.253 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.254 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.255 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.256 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.257 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.258 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.259 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.260 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.261 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.262 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.263 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.264 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.265 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.266 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.267 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.268 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.269 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.270 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.271 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.277 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.278 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.279 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.280 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.281 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.282 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.283 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.284 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.285 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.286 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.287 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.287 349434 DEBUG oslo_service.service [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
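[Editor's note: the block ending in the asterisk banner above is oslo.config's startup dump. When an oslo.service-based daemon starts in debug mode, it logs every registered option as one "group.option = value" line via ConfigOpts.log_opt_values(), closing with the banner. A minimal sketch of the same mechanism; the demo_privsep group and its option are hypothetical, for illustration only.]

```python
import logging

from oslo_config import cfg

CONF = cfg.CONF
# Hypothetical option group for illustration; nova registers its own options.
CONF.register_opts([cfg.IntOpt('thread_pool_size', default=8)],
                   group='demo_privsep')

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF([])  # parse an (empty) command line so option values are resolved
# Emits "demo_privsep.thread_pool_size = 8" plus the same asterisk banner
# seen in the journal above.
CONF.log_opt_values(LOG, logging.DEBUG)
```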
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.288 349434 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.316 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.317 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.317 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.317 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 01:36:55 compute-0 python3.9[349746]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.338 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff5b781bd90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.344 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff5b781bd90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.346 349434 INFO nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Connection event '1' reason 'None'
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.381 349434 WARNING nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 01:36:55 compute-0 nova_compute[349430]: 2025-11-26 01:36:55.382 349434 DEBUG nova.virt.libvirt.volume.mount [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
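[Editor's note: the ComputeHostNotFound warning above is typical of a first start: the driver tries to report service status before nova has created a service record for this host, and the record is created shortly afterwards during startup. One way to confirm registration after the fact is to list the compute services; a sketch assuming openstacksdk with a clouds.yaml entry (the cloud name 'overcloud' is an assumption, not taken from this log).]

```python
import openstack

# 'overcloud' is a placeholder clouds.yaml entry with admin credentials.
conn = openstack.connect(cloud='overcloud')

# Once registered, a nova-compute row for this host should appear here.
for svc in conn.compute.services():
    if svc.binary == 'nova-compute':
        print(svc.host, svc.state, svc.status)
```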
Nov 26 01:36:55 compute-0 auditd[705]: Audit daemon rotating log files
Nov 26 01:36:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:56 compute-0 python3.9[349935]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.519 349434 INFO nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]: 
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <host>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <uuid>2220aeb1-94e1-4f31-94a2-20ade60d36f9</uuid>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <arch>x86_64</arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model>EPYC-Rome-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <vendor>AMD</vendor>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <microcode version='16777317'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <signature family='23' model='49' stepping='0'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='x2apic'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='tsc-deadline'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='osxsave'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='hypervisor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='tsc_adjust'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='spec-ctrl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='stibp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='arch-capabilities'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='cmp_legacy'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='topoext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='virt-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='lbrv'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='tsc-scale'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='vmcb-clean'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='pause-filter'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='pfthreshold'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='svme-addr-chk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='rdctl-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='skip-l1dfl-vmentry'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='mds-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature name='pschange-mc-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <pages unit='KiB' size='4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <pages unit='KiB' size='2048'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <pages unit='KiB' size='1048576'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <power_management>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <suspend_mem/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </power_management>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <iommu support='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <migration_features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <live/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <uri_transports>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <uri_transport>tcp</uri_transport>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <uri_transport>rdma</uri_transport>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </uri_transports>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </migration_features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <topology>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <cells num='1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <cell id='0'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <memory unit='KiB'>7864316</memory>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <pages unit='KiB' size='2048'>0</pages>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <distances>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <sibling id='0' value='10'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          </distances>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          <cpus num='8'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:          </cpus>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        </cell>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </cells>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </topology>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <cache>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </cache>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <secmodel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model>selinux</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <doi>0</doi>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </secmodel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <secmodel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model>dac</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <doi>0</doi>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </secmodel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </host>
Nov 26 01:36:56 compute-0 nova_compute[349430]: 
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <guest>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <os_type>hvm</os_type>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <arch name='i686'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <wordsize>32</wordsize>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <domain type='qemu'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <domain type='kvm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <pae/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <nonpae/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <acpi default='on' toggle='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <apic default='on' toggle='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <cpuselection/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <deviceboot/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <disksnapshot default='on' toggle='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <externalSnapshot/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </guest>
Nov 26 01:36:56 compute-0 nova_compute[349430]: 
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <guest>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <os_type>hvm</os_type>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <arch name='x86_64'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <wordsize>64</wordsize>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <domain type='qemu'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <domain type='kvm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <acpi default='on' toggle='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <apic default='on' toggle='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <cpuselection/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <deviceboot/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <disksnapshot default='on' toggle='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <externalSnapshot/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </guest>
Nov 26 01:36:56 compute-0 nova_compute[349430]: 
Nov 26 01:36:56 compute-0 nova_compute[349430]: </capabilities>
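[Editor's note: the <capabilities> document above is libvirt's description of the host, fetched by nova over the qemu:///system connection opened earlier; the per-arch <domainCapabilities> dump that follows is a separate query. Both documents can be retrieved directly with the libvirt Python bindings; a sketch assuming the bindings are installed and a local libvirt daemon is reachable.]

```python
import libvirt

# A read-only connection is sufficient for capability queries.
conn = libvirt.openReadOnly('qemu:///system')
try:
    # Host capabilities: the <capabilities> XML logged above.
    print(conn.getCapabilities())

    # Domain capabilities for one emulator/arch/machine/virt-type combination,
    # matching the i686 q35 query that follows in the log.
    print(conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'i686', 'q35', 'kvm'))
finally:
    conn.close()
```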
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.533 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.588 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 01:36:56 compute-0 nova_compute[349430]: <domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <domain>kvm</domain>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <arch>i686</arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <vcpu max='4096'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <iothreads supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <os supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='firmware'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <loader supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>rom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pflash</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='readonly'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>yes</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='secure'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </loader>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </os>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='maximum' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='maximumMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-model' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <vendor>AMD</vendor>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='x2apic'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='stibp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='succor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lbrv'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='custom' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Dhyana-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-128'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-256'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-512'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <memoryBacking supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='sourceType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>anonymous</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>memfd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </memoryBacking>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <disk supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='diskDevice'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>disk</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cdrom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>floppy</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>lun</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>fdc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>sata</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </disk>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <graphics supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vnc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egl-headless</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </graphics>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <video supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='modelType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vga</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cirrus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>none</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>bochs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ramfb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </video>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hostdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='mode'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>subsystem</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='startupPolicy'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>mandatory</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>requisite</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>optional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='subsysType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pci</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='capsType'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='pciBackend'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hostdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <rng supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>random</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </rng>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <filesystem supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='driverType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>path</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>handle</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtiofs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </filesystem>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <tpm supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-tis</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-crb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emulator</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>external</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendVersion'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>2.0</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </tpm>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <redirdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </redirdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <channel supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </channel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <crypto supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </crypto>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <interface supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>passt</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </interface>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <panic supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>isa</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>hyperv</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </panic>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <console supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>null</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dev</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pipe</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stdio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>udp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tcp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu-vdagent</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </console>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <gic supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <vmcoreinfo supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <genid supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backingStoreInput supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backup supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <async-teardown supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <ps2 supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sev supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sgx supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hyperv supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='features'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>relaxed</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vapic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>spinlocks</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vpindex</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>runtime</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>synic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stimer</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reset</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vendor_id</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>frequencies</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reenlightenment</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tlbflush</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ipi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>avic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emsr_bitmap</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>xmm_input</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <spinlocks>4095</spinlocks>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <stimer_direct>on</stimer_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hyperv>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <launchSecurity supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='sectype'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tdx</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </launchSecurity>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </features>
Nov 26 01:36:56 compute-0 nova_compute[349430]: </domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
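[Note: the <domainCapabilities> document that closes above is what nova-compute fetches from libvirt once per (emulator, arch, machine type) combination and dumps at DEBUG level via _get_domain_capabilities() in host.py. Below is a minimal libvirt-python sketch of retrieving the same document and summarizing which custom-mode CPU models are usable on the host. The helper name, connection URI, and the x86_64/pc arguments are illustrative assumptions; only the getDomainCapabilities() call and the XML layout come from the dump itself.]

    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings


    def usable_cpu_models(uri='qemu:///system',
                          emulator='/usr/libexec/qemu-kvm',
                          arch='x86_64', machine='pc', virttype='kvm'):
        """Return {CPU model name: usable?} from libvirt domainCapabilities."""
        conn = libvirt.open(uri)
        try:
            # The libvirt call behind nova's _get_domain_capabilities();
            # the flags argument is unused, so pass 0.
            caps_xml = conn.getDomainCapabilities(
                emulator, arch, machine, virttype, 0)
        finally:
            conn.close()
        root = ET.fromstring(caps_xml)
        # <cpu><mode name='custom'> lists one <model usable='yes|no'> per
        # named model; 'no' entries carry a sibling <blockers> element
        # naming the host-missing features, as seen in the dump above.
        return {m.text: m.get('usable') == 'yes'
                for m in root.findall("./cpu/mode[@name='custom']/model")}


    if __name__ == '__main__':
        for name, ok in sorted(usable_cpu_models().items()):
            print('usable' if ok else 'blocked', name)

[Against the host that produced this log, the SapphireRapids, SierraForest, Skylake, and Snowridge variants above would all report blocked, matching their usable='no' entries and <blockers> lists, while the Westmere variants would report usable.]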
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.622 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 01:36:56 compute-0 nova_compute[349430]: <domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <domain>kvm</domain>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <arch>i686</arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <vcpu max='240'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <iothreads supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <os supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='firmware'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <loader supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>rom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pflash</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='readonly'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>yes</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='secure'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </loader>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </os>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='maximum' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='maximumMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-model' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <vendor>AMD</vendor>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='x2apic'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='stibp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='succor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lbrv'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='custom' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Dhyana-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-128'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-256'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-512'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <memoryBacking supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='sourceType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>anonymous</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>memfd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </memoryBacking>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <disk supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='diskDevice'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>disk</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cdrom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>floppy</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>lun</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ide</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>fdc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>sata</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </disk>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <graphics supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vnc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egl-headless</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </graphics>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <video supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='modelType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vga</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cirrus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>none</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>bochs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ramfb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </video>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hostdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='mode'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>subsystem</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='startupPolicy'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>mandatory</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>requisite</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>optional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='subsysType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pci</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='capsType'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='pciBackend'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hostdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <rng supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>random</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </rng>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <filesystem supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='driverType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>path</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>handle</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtiofs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </filesystem>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <tpm supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-tis</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-crb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emulator</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>external</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendVersion'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>2.0</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </tpm>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <redirdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </redirdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <channel supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </channel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <crypto supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </crypto>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <interface supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>passt</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </interface>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <panic supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>isa</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>hyperv</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </panic>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <console supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>null</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dev</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pipe</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stdio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>udp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tcp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu-vdagent</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </console>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <gic supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <vmcoreinfo supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <genid supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backingStoreInput supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backup supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <async-teardown supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <ps2 supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sev supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sgx supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hyperv supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='features'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>relaxed</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vapic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>spinlocks</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vpindex</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>runtime</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>synic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stimer</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reset</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vendor_id</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>frequencies</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reenlightenment</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tlbflush</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ipi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>avic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emsr_bitmap</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>xmm_input</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <spinlocks>4095</spinlocks>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <stimer_direct>on</stimer_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hyperv>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <launchSecurity supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='sectype'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tdx</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </launchSecurity>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </features>
Nov 26 01:36:56 compute-0 nova_compute[349430]: </domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.685 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.694 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 01:36:56 compute-0 nova_compute[349430]: <domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <domain>kvm</domain>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <arch>x86_64</arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <vcpu max='4096'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <iothreads supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <os supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='firmware'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>efi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <loader supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>rom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pflash</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='readonly'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>yes</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='secure'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>yes</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </loader>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </os>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='maximum' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='maximumMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-model' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <vendor>AMD</vendor>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='x2apic'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='stibp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='succor'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lbrv'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='custom' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Denverton-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Dhyana-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='EPYC-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-128'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-256'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx10-512'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Haswell-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='KnightsMill-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='SierraForest-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v2'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v3'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v4'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='athlon-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='core2duo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='coreduo-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='n270-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <blockers model='phenom-v1'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <memoryBacking supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='sourceType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>anonymous</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>memfd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </memoryBacking>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <disk supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='diskDevice'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>disk</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cdrom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>floppy</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>lun</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>fdc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>sata</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </disk>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <graphics supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vnc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egl-headless</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </graphics>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <video supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='modelType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vga</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>cirrus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>none</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>bochs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ramfb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </video>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hostdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='mode'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>subsystem</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='startupPolicy'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>mandatory</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>requisite</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>optional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='subsysType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pci</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='capsType'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='pciBackend'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hostdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <rng supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>random</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>egd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </rng>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <filesystem supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='driverType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>path</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>handle</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>virtiofs</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </filesystem>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <tpm supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-tis</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tpm-crb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emulator</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>external</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendVersion'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>2.0</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </tpm>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <redirdev supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </redirdev>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <channel supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </channel>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <crypto supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </crypto>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <interface supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='backendType'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>passt</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </interface>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <panic supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>isa</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>hyperv</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </panic>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <console supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>null</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vc</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dev</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>file</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pipe</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stdio</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>udp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tcp</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>qemu-vdagent</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </console>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </devices>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <features>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <gic supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <vmcoreinfo supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <genid supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backingStoreInput supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <backup supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <async-teardown supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <ps2 supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sev supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <sgx supported='no'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <hyperv supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='features'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>relaxed</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vapic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>spinlocks</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vpindex</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>runtime</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>synic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>stimer</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reset</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>vendor_id</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>frequencies</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>reenlightenment</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tlbflush</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>ipi</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>avic</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>emsr_bitmap</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>xmm_input</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <spinlocks>4095</spinlocks>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <stimer_direct>on</stimer_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </defaults>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </hyperv>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <launchSecurity supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='sectype'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>tdx</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </launchSecurity>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </features>
Nov 26 01:36:56 compute-0 nova_compute[349430]: </domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 01:36:56 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.795 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 01:36:56 compute-0 nova_compute[349430]: <domainCapabilities>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <domain>kvm</domain>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <arch>x86_64</arch>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <vcpu max='240'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <iothreads supported='yes'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <os supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <enum name='firmware'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <loader supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>rom</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>pflash</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='readonly'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>yes</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='secure'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>no</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </loader>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  </os>
Nov 26 01:36:56 compute-0 nova_compute[349430]:  <cpu>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='maximum' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <enum name='maximumMigratable'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>on</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:        <value>off</value>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:56 compute-0 nova_compute[349430]:    <mode name='host-model' supported='yes'>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <vendor>AMD</vendor>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='x2apic'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:36:56 compute-0 nova_compute[349430]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='stibp'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='ssbd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='succor'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='ibrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='lbrv'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <mode name='custom' supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Broadwell-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cooperlake'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Cooperlake-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Denverton'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Denverton-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Denverton-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Denverton-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Dhyana-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='auto-ibrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amd-psfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='no-nested-data-bp'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='null-sel-clr-base'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='stibp-always-on'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='EPYC-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx10'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx10-128'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx10-256'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx10-512'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='prefetchiti'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Haswell-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='IvyBridge'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='IvyBridge-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='KnightsMill'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='KnightsMill-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-4fmaps'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-4vnniw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512er'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512pf'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fma4'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tbm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xop'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='amx-tile'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-bf16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-fp16'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bitalg'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vbmi2'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrc'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fzrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='la57'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='taa-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='tsx-ldtrk'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xfd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SierraForest'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='SierraForest-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-ifma'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-ne-convert'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx-vnni-int8'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='bus-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cmpccxadd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fbsdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='fsrs'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ibrs-all'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mcdt-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pbrsb-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='psdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='serialize'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vaes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='vpclmulqdq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='hle'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='rtm'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512bw'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512cd'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512dq'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512f'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='avx512vl'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='invpcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pcid'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='pku'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Snowridge'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='mpx'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v2'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v3'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='core-capability'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='split-lock-detect'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='Snowridge-v4'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='cldemote'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='erms'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='gfni'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdir64b'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='movdiri'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='xsaves'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='athlon'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='athlon-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='core2duo'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='core2duo-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='coreduo'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='coreduo-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='n270'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='n270-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='ss'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='phenom'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <blockers model='phenom-v1'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnow'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <feature name='3dnowext'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </blockers>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </mode>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  </cpu>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  <memoryBacking supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <enum name='sourceType'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <value>file</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <value>anonymous</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <value>memfd</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  </memoryBacking>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  <devices>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <disk supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='diskDevice'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>disk</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>cdrom</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>floppy</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>lun</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>ide</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>fdc</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>sata</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </disk>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <graphics supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vnc</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>egl-headless</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </graphics>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <video supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='modelType'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vga</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>cirrus</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>none</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>bochs</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>ramfb</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </video>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <hostdev supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='mode'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>subsystem</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='startupPolicy'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>mandatory</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>requisite</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>optional</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='subsysType'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>pci</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>scsi</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='capsType'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='pciBackend'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </hostdev>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <rng supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio-transitional</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtio-non-transitional</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>random</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>egd</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </rng>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <filesystem supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='driverType'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>path</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>handle</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>virtiofs</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </filesystem>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <tpm supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>tpm-tis</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>tpm-crb</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>emulator</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>external</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='backendVersion'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>2.0</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </tpm>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <redirdev supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='bus'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>usb</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </redirdev>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <channel supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </channel>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <crypto supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='model'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>qemu</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='backendModel'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>builtin</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </crypto>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <interface supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='backendType'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>default</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>passt</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </interface>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <panic supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='model'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>isa</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>hyperv</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </panic>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <console supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='type'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>null</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vc</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>pty</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>dev</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>file</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>pipe</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>stdio</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>udp</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>tcp</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>unix</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>qemu-vdagent</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>dbus</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </console>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  </devices>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  <features>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <gic supported='no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <vmcoreinfo supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <genid supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <backingStoreInput supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <backup supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <async-teardown supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <ps2 supported='yes'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <sev supported='no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <sgx supported='no'/>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <hyperv supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='features'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>relaxed</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vapic</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>spinlocks</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vpindex</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>runtime</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>synic</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>stimer</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>reset</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>vendor_id</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>frequencies</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>reenlightenment</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>tlbflush</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>ipi</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>avic</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>emsr_bitmap</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>xmm_input</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <defaults>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <spinlocks>4095</spinlocks>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <stimer_direct>on</stimer_direct>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </defaults>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </hyperv>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    <launchSecurity supported='yes'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      <enum name='sectype'>
Nov 26 01:36:57 compute-0 nova_compute[349430]:        <value>tdx</value>
Nov 26 01:36:57 compute-0 nova_compute[349430]:      </enum>
Nov 26 01:36:57 compute-0 nova_compute[349430]:    </launchSecurity>
Nov 26 01:36:57 compute-0 nova_compute[349430]:  </features>
Nov 26 01:36:57 compute-0 nova_compute[349430]: </domainCapabilities>
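
The XML above is libvirt's domainCapabilities report for this host, which Nova caches to decide what it may put into guest XML (Hyper-V enlightenments, launchSecurity types, console backends, and so on). A minimal sketch of fetching and inspecting the same document with the libvirt-python bindings, assuming a reachable local qemu:///system socket:

    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings

    conn = libvirt.open('qemu:///system')
    # emulator and machine left as None so libvirt picks its defaults for the
    # requested arch/virttype.
    caps = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))
    hyperv = caps.find("./features/hyperv")
    if hyperv is not None and hyperv.get('supported') == 'yes':
        names = [v.text for v in hyperv.findall("./enum[@name='features']/value")]
        print('hyperv enlightenments:', ', '.join(names))
    conn.close()
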
Nov 26 01:36:57 compute-0 nova_compute[349430]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.912 349434 DEBUG nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.913 349434 INFO nova.virt.libvirt.host [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Secure Boot support detected
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.917 349434 INFO nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.918 349434 INFO nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.940 349434 DEBUG nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:56.999 349434 INFO nova.virt.node [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Determined node identity 0e9e5c9b-dee2-4076-966b-e19b2697b966 from /var/lib/nova/compute_id
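
The node identity above comes from a persisted UUID file rather than the hostname, so the record survives renames. A sketch of the parse this implies (path taken from the log line; helper name is illustrative):

    import uuid
    from pathlib import Path

    def read_compute_id(path='/var/lib/nova/compute_id'):
        # Illustrative parse: the file holds a single UUID string; a corrupt
        # file surfaces as ValueError rather than a silently new identity.
        return uuid.UUID(Path(path).read_text().strip())
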
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.041 349434 WARNING nova.compute.manager [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Compute nodes ['0e9e5c9b-dee2-4076-966b-e19b2697b966'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.115 349434 INFO nova.compute.manager [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.161 349434 WARNING nova.compute.manager [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.162 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.162 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.163 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.164 349434 DEBUG nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.164 349434 DEBUG oslo_concurrency.processutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:36:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:36:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057391954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:36:57 compute-0 nova_compute[349430]: 2025-11-26 01:36:57.688 349434 DEBUG oslo_concurrency.processutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
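
With an RBD-backed instance store, the resource audit sizes its disk pool by shelling out to ceph df, exactly as logged above. A rough equivalent using the standard library instead of oslo.concurrency's processutils wrapper (function name illustrative; consult the ceph df JSON schema for the exact keys):

    import json
    import subprocess

    def ceph_df(conf='/etc/ceph/ceph.conf', user='openstack'):
        # Same probe as the audit above, minus the processutils wrapper.
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # e.g. ceph_df()['stats']['total_avail_bytes'] -> free bytes in the cluster
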
Nov 26 01:36:57 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 01:36:57 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 26 01:36:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.356 349434 WARNING nova.virt.libvirt.driver [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.360 349434 DEBUG nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4565MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.360 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.361 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.383 349434 WARNING nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] No compute node record for compute-0.ctlplane.example.com:0e9e5c9b-dee2-4076-966b-e19b2697b966: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 0e9e5c9b-dee2-4076-966b-e19b2697b966 could not be found.
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.407 349434 INFO nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 0e9e5c9b-dee2-4076-966b-e19b2697b966
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.493 349434 DEBUG nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:36:58 compute-0 nova_compute[349430]: 2025-11-26 01:36:58.494 349434 DEBUG nova.compute.resource_tracker [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:36:58 compute-0 python3.9[350136]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 01:36:58 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:36:59 compute-0 nova_compute[349430]: 2025-11-26 01:36:59.367 349434 INFO nova.scheduler.client.report [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] [req-079d223c-01d2-4c52-8d27-0b1d08c3bd9c] Created resource provider record via placement API for resource provider with UUID 0e9e5c9b-dee2-4076-966b-e19b2697b966 and name compute-0.ctlplane.example.com.
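
Creating the resource provider is a single Placement REST call keyed on the node UUID. A hedged sketch of the equivalent request; the endpoint, token, and microversion below are placeholders, not values from this deployment:

    import requests

    PLACEMENT = 'https://placement.example.com/placement'  # placeholder endpoint
    TOKEN = 'gAAAA...'                                     # placeholder keystone token

    resp = requests.post(
        f'{PLACEMENT}/resource_providers',
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.20'},
        json={'uuid': '0e9e5c9b-dee2-4076-966b-e19b2697b966',
              'name': 'compute-0.ctlplane.example.com'},
        timeout=10)
    resp.raise_for_status()  # microversion >= 1.20 returns the provider body
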
Nov 26 01:36:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:36:59 compute-0 podman[158021]: time="2025-11-26T01:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:36:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42588 "" "Go-http-client/1.1"
Nov 26 01:36:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
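
The two GET lines above are a client (here a metrics poller) talking to podman's libpod REST API over its Unix socket. A standard-library sketch of the same container-list query; the socket path assumes the default rootful API service:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (podman's API service)."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # default root socket
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
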
Nov 26 01:36:59 compute-0 nova_compute[349430]: 2025-11-26 01:36:59.797 349434 DEBUG oslo_concurrency.processutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:36:59 compute-0 python3.9[350310]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:37:00 compute-0 systemd[1]: Stopping nova_compute container...
Nov 26 01:37:00 compute-0 nova_compute[349430]: 2025-11-26 01:37:00.113 349434 DEBUG oslo_concurrency.lockutils [None req-486ebb25-ad0e-4b3f-8292-d85f60f9fb52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:37:00 compute-0 nova_compute[349430]: 2025-11-26 01:37:00.114 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:37:00 compute-0 nova_compute[349430]: 2025-11-26 01:37:00.114 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:37:00 compute-0 nova_compute[349430]: 2025-11-26 01:37:00.114 349434 DEBUG oslo_concurrency.lockutils [None req-1a2490b8-5be9-4cf1-a316-271ce0d38d3d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:37:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:00 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 01:37:00 compute-0 systemd[1]: libpod-ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6.scope: Deactivated successfully.
Nov 26 01:37:00 compute-0 systemd[1]: libpod-ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6.scope: Consumed 4.614s CPU time.
Nov 26 01:37:00 compute-0 podman[350334]: 2025-11-26 01:37:00.615129178 +0000 UTC m=+0.579098209 container died ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 01:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6-userdata-shm.mount: Deactivated successfully.
Nov 26 01:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359-merged.mount: Deactivated successfully.
Nov 26 01:37:01 compute-0 openstack_network_exporter[160178]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:37:01 compute-0 openstack_network_exporter[160178]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:37:01 compute-0 openstack_network_exporter[160178]: ERROR   01:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:37:01 compute-0 openstack_network_exporter[160178]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:37:01 compute-0 openstack_network_exporter[160178]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:37:01 compute-0 podman[350334]: 2025-11-26 01:37:01.56102771 +0000 UTC m=+1.524996691 container cleanup ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 26 01:37:01 compute-0 podman[350334]: nova_compute
Nov 26 01:37:01 compute-0 podman[350360]: nova_compute
Nov 26 01:37:01 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 26 01:37:01 compute-0 systemd[1]: Stopped nova_compute container.
Nov 26 01:37:01 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.192s CPU time, 17.6M memory peak, read 0B from disk, written 107.0K to disk.
Nov 26 01:37:01 compute-0 systemd[1]: Starting nova_compute container...
Nov 26 01:37:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71409531e552783a5c89b5b3c3f686d14563002c222d23aff80a6f2d424b359/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:01 compute-0 podman[350372]: 2025-11-26 01:37:01.907088923 +0000 UTC m=+0.147600260 container init ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118)
Nov 26 01:37:01 compute-0 podman[350372]: 2025-11-26 01:37:01.927757382 +0000 UTC m=+0.168268719 container start ac7effc437d16249430863efb1c7b725e07f28ac46be28a00010c23b9db621e6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:37:01 compute-0 podman[350372]: nova_compute
Nov 26 01:37:01 compute-0 nova_compute[350387]: + sudo -E kolla_set_configs
Nov 26 01:37:01 compute-0 systemd[1]: Started nova_compute container.
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Validating config file
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying service configuration files
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /etc/ceph
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Creating directory /etc/ceph
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Writing out command to execute
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:37:02 compute-0 nova_compute[350387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 01:37:02 compute-0 nova_compute[350387]: ++ cat /run_command
Nov 26 01:37:02 compute-0 nova_compute[350387]: + CMD=nova-compute
Nov 26 01:37:02 compute-0 nova_compute[350387]: + ARGS=
Nov 26 01:37:02 compute-0 nova_compute[350387]: + sudo kolla_copy_cacerts
Nov 26 01:37:02 compute-0 nova_compute[350387]: + [[ ! -n '' ]]
Nov 26 01:37:02 compute-0 nova_compute[350387]: + . kolla_extend_start
Nov 26 01:37:02 compute-0 nova_compute[350387]: Running command: 'nova-compute'
Nov 26 01:37:02 compute-0 nova_compute[350387]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 01:37:02 compute-0 nova_compute[350387]: + umask 0022
Nov 26 01:37:02 compute-0 nova_compute[350387]: + exec nova-compute
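
Everything from "Loading config file" through "exec nova-compute" is kolla's container entrypoint: kolla_set_configs applies the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json, then kolla_start reads the command back from /run_command and execs it. A much-reduced sketch of that copy loop (the real tool also handles ownership, deletes, and optional entries):

    import json
    import os
    import shutil

    def copy_configs(path='/var/lib/kolla/config_files/config.json'):
        # Reduced COPY_ALWAYS sketch: every listed file is re-copied over its
        # destination on each container start, then re-chmodded.
        with open(path) as f:
            cfg = json.load(f)
        for item in cfg.get('config_files', []):
            src, dest = item['source'], item['dest']
            if os.path.isdir(src):
                shutil.copytree(src, dest, dirs_exist_ok=True)
            else:
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy(src, dest)
            if 'perm' in item:
                os.chmod(dest, int(item['perm'], 8))
        return cfg.get('command', '')  # what kolla writes to /run_command
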
Nov 26 01:37:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:03 compute-0 python3.9[350551]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 01:37:03 compute-0 systemd[1]: Started libpod-conmon-d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588.scope.
Nov 26 01:37:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae92c34daf69728c5a1a4a3c275c4b592ed04d329f7a412afe0475ac5731f2/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae92c34daf69728c5a1a4a3c275c4b592ed04d329f7a412afe0475ac5731f2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdae92c34daf69728c5a1a4a3c275c4b592ed04d329f7a412afe0475ac5731f2/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:03 compute-0 podman[350575]: 2025-11-26 01:37:03.610540236 +0000 UTC m=+0.231111311 container init d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:37:03 compute-0 podman[350575]: 2025-11-26 01:37:03.632414039 +0000 UTC m=+0.252985054 container start d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3)
Nov 26 01:37:03 compute-0 python3.9[350551]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Applying nova statedir ownership
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 26 01:37:03 compute-0 nova_compute_init[350595]: INFO:nova_statedir:Nova statedir ownership complete
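
Per the lines above, nova_compute_init's job is to walk /var/lib/nova, re-owning anything not already 42436:42436 while honouring NOVA_STATEDIR_OWNERSHIP_SKIP so the compute_id file is left untouched. A reduced sketch of that walk (the real script also sets SELinux contexts):

    import os

    TARGET_UID = TARGET_GID = 42436           # 'nova' uid/gid inside the image
    SKIP = {'/var/lib/nova/compute_id'}       # NOVA_STATEDIR_OWNERSHIP_SKIP above

    def apply_ownership(root='/var/lib/nova'):
        # Re-own everything under the state dir that is not already owned by
        # the target ids, leaving the node-identity file as-is.
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)
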
Nov 26 01:37:03 compute-0 systemd[1]: libpod-d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588.scope: Deactivated successfully.
Nov 26 01:37:03 compute-0 conmon[350588]: conmon d1aca02bfc4238c22781 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588.scope/container/memory.events
Nov 26 01:37:03 compute-0 podman[350596]: 2025-11-26 01:37:03.7280061 +0000 UTC m=+0.050916519 container died d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 26 01:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588-userdata-shm.mount: Deactivated successfully.
Nov 26 01:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdae92c34daf69728c5a1a4a3c275c4b592ed04d329f7a412afe0475ac5731f2-merged.mount: Deactivated successfully.
Nov 26 01:37:03 compute-0 podman[350607]: 2025-11-26 01:37:03.839345301 +0000 UTC m=+0.106352733 container cleanup d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 01:37:03 compute-0 systemd[1]: libpod-conmon-d1aca02bfc4238c227812804c261db3b54300ce558d483a03a70331a43bbb588.scope: Deactivated successfully.
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.118 350391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.119 350391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.119 350391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.119 350391 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
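
os_vif finds its plugins through a setuptools entry-point namespace, which is why the loaded set (linux_bridge, noop, ovs) depends only on what is installed in the image. A sketch of the same discovery via stevedore:

    from stevedore import extension

    # The entry-point namespace os_vif scans; plugin packages such as
    # vif_plug_ovs register themselves here at install time.
    mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
    print('Loaded VIF plugins:', ', '.join(sorted(mgr.names())))
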
Nov 26 01:37:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.261 350391 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.288 350391 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.289 350391 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
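
The failing grep above is a capability probe, not an error: the storage layer checks whether this iscsiadm build knows the node.session.scan option (manual scan support), and grep's exit code 1 simply means the string is absent. A sketch of the probe (helper name illustrative):

    import subprocess

    def iscsiadm_supports_manual_scan(binary='/sbin/iscsiadm'):
        # Mirrors the probe above: grep the binary for the option string;
        # grep exits 1 when the pattern is absent, which is not an error here.
        res = subprocess.run(['grep', '-F', 'node.session.scan', binary],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return res.returncode == 0
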
Nov 26 01:37:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:04 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Nov 26 01:37:04 compute-0 systemd[1]: session-55.scope: Consumed 4min 11.101s CPU time.
Nov 26 01:37:04 compute-0 systemd-logind[800]: Session 55 logged out. Waiting for processes to exit.
Nov 26 01:37:04 compute-0 systemd-logind[800]: Removed session 55.
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.752 350391 INFO nova.virt.driver [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.885 350391 INFO nova.compute.provider_config [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.907 350391 DEBUG oslo_concurrency.lockutils [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.908 350391 DEBUG oslo_concurrency.lockutils [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.908 350391 DEBUG oslo_concurrency.lockutils [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.908 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.908 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.908 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.909 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.910 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.911 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.912 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.913 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.914 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.914 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.914 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.914 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.914 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.915 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.916 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.917 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.918 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.919 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.920 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.921 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.922 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.923 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.924 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.925 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.926 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.927 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.928 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.929 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.930 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
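
The per-option lines above come from oslo.config: because log_options = True, oslo.service dumps every registered option at DEBUG when the service starts by calling ConfigOpts.log_opt_values() — the "log_opt_values .../oslo_config/cfg.py" tail on each message names the emitting function and its source line (2602 for ungrouped options, 2609 for grouped ones). A minimal, runnable sketch of that mechanism follows; the report_interval option registered here is illustrative, not nova's actual registration code.

    # Minimal sketch: oslo.config logs one DEBUG line per registered option.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Illustrative option; nova registers hundreds of these at import time.
    CONF.register_opts([cfg.IntOpt('report_interval', default=10)])

    CONF([])                                 # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits "report_interval = 10" ...
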
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.931 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.932 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.933 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
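
Dotted names such as api.auth_strategy are oslo.config option groups: an option registered under group "api" is read from an [api] section of the files listed in config_file above (/etc/nova/nova.conf, then /etc/nova/nova-compute.conf, plus the config_dir drop-ins), with later sources overriding earlier ones. A small sketch of that group-to-section mapping, using a throwaway temp file and a made-up two-option subset rather than nova's real schema:

    # Sketch: a "group" option maps to an INI section of the same name.
    import tempfile

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('auth_strategy', default='keystone'),
                        cfg.IntOpt('max_limit', default=1000)],
                       group='api')

    # Write a tiny config file that overrides one of the two options.
    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[api]\nmax_limit = 500\n')

    conf(['--config-file', f.name])
    print(conf.api.auth_strategy)  # keystone  (default kept)
    print(conf.api.max_limit)      # 500       (overridden by the file)
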
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.934 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.935 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.936 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.937 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.938 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.939 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.940 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
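
[editor's note] The [cinder] block above shows how this compute node locates the volume service: cinder.catalog_info is a three-field service_type:service_name:endpoint_type selector (here volumev3:cinderv3:internalURL) matched against the Keystone service catalog. A minimal sketch of registering and splitting such an option with oslo.config follows; the option subset is trimmed and the endpoint_filter() helper is hypothetical, not Nova's actual code.

    # Minimal sketch, not Nova's implementation: register a few [cinder]
    # options and split catalog_info into its three fields. The defaults
    # shown mirror the values logged above.
    from oslo_config import cfg

    cinder_opts = [
        cfg.StrOpt('catalog_info', default='volumev3:cinderv3:internalURL',
                   help='Format is service_type:service_name:endpoint_type.'),
        cfg.IntOpt('http_retries', default=3),
        cfg.BoolOpt('cross_az_attach', default=True),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(cinder_opts, group='cinder')

    def endpoint_filter(conf):
        # 'volumev3:cinderv3:internalURL' -> catalog lookup parameters
        service_type, service_name, endpoint_type = conf.cinder.catalog_info.split(':')
        return {'service_type': service_type,
                'service_name': service_name,
                'interface': endpoint_type.replace('URL', '')}

    if __name__ == '__main__':
        CONF([])  # no CLI args or config files; defaults apply
        print(endpoint_filter(CONF))
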
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.941 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.942 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.943 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.944 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.945 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
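
[editor's note] The same option names (cafile, certfile, insecure, timeout, collect_timing, split_loggers, service_type, valid_interfaces, region_name, endpoint_override, min_version/max_version, connect_retries, ...) recur under cinder and cyborg above, and under glance, ironic, and barbican further down, because they are keystoneauth1's shared session and adapter option sets, registered once per service group. A sketch, assuming keystoneauth1 is installed; Nova supplies per-group defaults such as service_type = accelerator when it registers the group, which this sketch does not reproduce.

    # Sketch: how one per-service config group acquires the shared
    # keystoneauth session options (cafile, insecure, timeout, ...) and
    # adapter options (service_type, valid_interfaces, region_name, ...).
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    ks_loading.register_session_conf_options(CONF, 'cyborg')
    ks_loading.register_adapter_conf_options(CONF, 'cyborg')

    CONF([])
    # Unset here; Nova registers 'accelerator' as the group's default.
    print(CONF.cyborg.service_type, CONF.cyborg.valid_interfaces)
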
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.946 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.947 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.948 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
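
[editor's note] database.connection and database.slave_connection print as **** rather than the real SQLAlchemy URL: oslo.config masks any option registered with secret=True when values are dumped through log_opt_values(), the cfg.py:2609 call site stamped on every line here. A self-contained sketch of that behaviour (the connection URL below is a dummy, not taken from this host):

    # Sketch of the masking behaviour: secret=True options are logged as
    # '****' by ConfigOpts.log_opt_values(); plain options print normally.
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('connection', secret=True,
                   default='mysql+pymysql://nova:dummy@db/nova'),  # dummy URL
        cfg.IntOpt('max_pool_size', default=5),
    ], group='database')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])
    CONF.log_opt_values(logging.getLogger('demo'), logging.DEBUG)
    # prints: database.connection = ****
    #         database.max_pool_size = 5
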
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.949 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.950 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.951 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.952 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.953 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.954 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.955 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.955 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.955 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.955 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.955 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.956 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.957 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.958 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.959 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.960 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.961 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.962 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.963 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.964 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.965 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.966 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.967 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.968 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.969 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.970 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.971 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.972 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.973 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.974 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.975 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.976 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.977 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.978 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.979 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.980 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.981 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.981 350391 WARNING oslo_config.cfg [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively.). Its value may be silently ignored in the future.#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.981 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.981 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.981 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.982 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.982 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.982 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.982 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.982 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.983 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rbd_secret_uuid        = 36901f64-240e-5c29-a2e2-29b56f2c329c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.984 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.985 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.985 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.985 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.985 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.985 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.986 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.987 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.988 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.989 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.990 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.991 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.992 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.993 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.994 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.995 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.996 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.997 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.998 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:04 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:04.999 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.000 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.001 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.002 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.003 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.004 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.005 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.005 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.005 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.006 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.006 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.006 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.007 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.007 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.007 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.007 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.008 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.008 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.008 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.008 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.009 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.009 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.009 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.009 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.010 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.010 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.010 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.010 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.011 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.011 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.011 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.011 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.012 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.012 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.012 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.013 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.013 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.013 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.013 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.014 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.014 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.014 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.014 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.014 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.015 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.016 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.016 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.016 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.016 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.016 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.017 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.017 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.017 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.018 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.018 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.018 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.018 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.018 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.019 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.020 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.020 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.020 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.020 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.020 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.021 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.021 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.021 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.021 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.021 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.022 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.022 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.022 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.022 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.022 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.023 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.023 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.023 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.023 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.023 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.024 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.024 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.024 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.024 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.025 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.025 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.025 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.025 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.025 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.026 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.026 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.026 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.026 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.026 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.027 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.028 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.028 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.028 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.028 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.028 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.029 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.029 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.029 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.029 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.029 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.030 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.030 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.030 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.030 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.030 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.031 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.031 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.031 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.031 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.031 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.032 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.032 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.032 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.032 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.032 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.033 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.033 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.033 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.033 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.034 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.034 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.034 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.034 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.034 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.035 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.035 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.035 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.035 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.036 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.036 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.036 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.036 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.037 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.037 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.037 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.037 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.037 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.038 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.038 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.038 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.038 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.038 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.039 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.039 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.039 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.039 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.040 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.041 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.041 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.041 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.041 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.041 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.042 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.042 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.042 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.042 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.042 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.043 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.044 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.044 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.044 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.044 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.044 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.045 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.046 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.047 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.048 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.049 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.050 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.051 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.052 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.052 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.052 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.052 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.052 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.053 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.054 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.055 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.055 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.055 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.055 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.055 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.056 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.056 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.056 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.056 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.056 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.057 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.057 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.057 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.057 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.057 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.058 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.059 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.059 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.059 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.059 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.059 350391 DEBUG oslo_service.service [None req-cf1d7a01-4787-412a-bc0a-ec8a61a2b02d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
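The dump above is oslo.config's standard startup option listing: on service start, oslo.service walks every registered option group and emits one "group.option = value" line per option, masking secret values such as passwords and transport URLs as ****. A minimal sketch of the same mechanism using the oslo.config API directly; the two option names are copied from the dump, but the defaults shown are illustrative, not the packaged nova defaults:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    # Two options from the [oslo_messaging_rabbit] group seen above.
    conf.register_opts(
        [
            cfg.BoolOpt('amqp_durable_queues', default=True),
            cfg.IntOpt('heartbeat_timeout_threshold', default=60),
        ],
        group='oslo_messaging_rabbit',
    )
    conf([])  # parse an empty command line / no config files

    # Emits "oslo_messaging_rabbit.amqp_durable_queues = True" etc. in the
    # same "group.option = value" format as the lines above; options
    # registered with secret=True would be printed as ****.
    conf.log_opt_values(LOG, logging.DEBUG)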
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.060 350391 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.079 350391 INFO nova.virt.node [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Determined node identity 0e9e5c9b-dee2-4076-966b-e19b2697b966 from /var/lib/nova/compute_id
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.080 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.081 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.081 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.081 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.104 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa453f6fd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.111 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa453f6fd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.113 350391 INFO nova.virt.libvirt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Connection event '1' reason 'None'
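At this point nova's Host object has opened qemu:///system and registered lifecycle and connection callbacks, which is what the event threads above dispatch. A minimal standalone sketch of the same sequence, assuming only the libvirt-python binding (the callback body is illustrative; nova's real handler translates these events into instance state changes):

    import libvirt

    # The default event loop implementation must be registered before the
    # connection is opened if lifecycle events are to be delivered.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.open('qemu:///system')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Illustrative handler: just print which domain changed state.
        print(dom.name(), event, detail)

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

    # A real service would now run libvirt.virEventRunDefaultImpl() in a
    # dedicated loop, as nova's "native event thread" does.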
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.120 350391 INFO nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]: 
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <host>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <uuid>2220aeb1-94e1-4f31-94a2-20ade60d36f9</uuid>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <arch>x86_64</arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model>EPYC-Rome-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <vendor>AMD</vendor>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <microcode version='16777317'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <signature family='23' model='49' stepping='0'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='x2apic'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='tsc-deadline'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='osxsave'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='hypervisor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='tsc_adjust'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='spec-ctrl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='stibp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='arch-capabilities'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='cmp_legacy'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='topoext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='virt-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='lbrv'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='tsc-scale'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='vmcb-clean'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='pause-filter'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='pfthreshold'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='svme-addr-chk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='rdctl-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='skip-l1dfl-vmentry'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='mds-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature name='pschange-mc-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <pages unit='KiB' size='4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <pages unit='KiB' size='2048'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <pages unit='KiB' size='1048576'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <power_management>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <suspend_mem/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </power_management>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <iommu support='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <migration_features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <live/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <uri_transports>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <uri_transport>tcp</uri_transport>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <uri_transport>rdma</uri_transport>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </uri_transports>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </migration_features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <topology>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <cells num='1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <cell id='0'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <memory unit='KiB'>7864316</memory>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <pages unit='KiB' size='2048'>0</pages>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <distances>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <sibling id='0' value='10'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          </distances>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          <cpus num='8'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:          </cpus>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        </cell>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </cells>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </topology>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <cache>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </cache>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <secmodel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model>selinux</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <doi>0</doi>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </secmodel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <secmodel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model>dac</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <doi>0</doi>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </secmodel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </host>
Nov 26 01:37:05 compute-0 nova_compute[350387]: 
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <guest>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <os_type>hvm</os_type>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <arch name='i686'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <wordsize>32</wordsize>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <domain type='qemu'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <domain type='kvm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <pae/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <nonpae/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <acpi default='on' toggle='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <apic default='on' toggle='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <cpuselection/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <deviceboot/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <disksnapshot default='on' toggle='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <externalSnapshot/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </guest>
Nov 26 01:37:05 compute-0 nova_compute[350387]: 
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <guest>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <os_type>hvm</os_type>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <arch name='x86_64'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <wordsize>64</wordsize>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <domain type='qemu'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <domain type='kvm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <acpi default='on' toggle='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <apic default='on' toggle='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <cpuselection/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <deviceboot/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <disksnapshot default='on' toggle='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <externalSnapshot/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </guest>
Nov 26 01:37:05 compute-0 nova_compute[350387]: 
Nov 26 01:37:05 compute-0 nova_compute[350387]: </capabilities>
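The <capabilities> document above is the raw XML returned by libvirt's getCapabilities() call; nova parses it to learn the host CPU model, the NUMA cells and their memory, and which guest architectures support KVM. A short sketch of extracting a few of those fields with the standard library, using element paths that match the dump above:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())

    # Host CPU summary, e.g. "x86_64 EPYC-Rome-v4 (AMD)" on this node.
    cpu = caps.find('./host/cpu')
    print(cpu.findtext('arch'), cpu.findtext('model'),
          '(%s)' % cpu.findtext('vendor'))

    # Per-NUMA-cell memory, e.g. "cell 0: 7864316 KiB".
    for cell in caps.findall('./host/topology/cells/cell'):
        print('cell %s: %s KiB' % (cell.get('id'), cell.findtext('memory')))

    # Guest architectures that offer a KVM domain type (i686 and x86_64 here).
    for guest in caps.findall('./guest'):
        arch = guest.find('arch')
        if arch.find("./domain[@type='kvm']") is not None:
            print('kvm guest arch:', arch.get('name'))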
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.133 350391 DEBUG nova.virt.libvirt.volume.mount [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.137 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
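Each arch/machine-type pair listed above is resolved through a separate libvirt getDomainCapabilities() call, whose XML result is what the next message prints. An equivalent direct query, mirroring the arch=i686, machine_type=pc case; the positional arguments are emulatorbin, arch, machine, virttype and flags:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    # Returns the <domainCapabilities> document shown below.
    print(conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'i686', 'pc', 'kvm', 0))

The same document is also available from the CLI via virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm.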
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.143 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 01:37:05 compute-0 nova_compute[350387]: <domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <domain>kvm</domain>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <arch>i686</arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <vcpu max='240'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <iothreads supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <os supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='firmware'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <loader supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>rom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pflash</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='readonly'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>yes</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='secure'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </loader>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </os>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='maximum' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='maximumMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-model' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <vendor>AMD</vendor>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='x2apic'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='stibp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='succor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lbrv'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='custom' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Dhyana-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-128'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-256'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-512'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <memoryBacking supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='sourceType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>anonymous</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>memfd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </memoryBacking>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <disk supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='diskDevice'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>disk</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cdrom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>floppy</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>lun</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ide</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>fdc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>sata</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <graphics supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vnc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egl-headless</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </graphics>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <video supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='modelType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vga</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cirrus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>none</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>bochs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ramfb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </video>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hostdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='mode'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>subsystem</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='startupPolicy'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>mandatory</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>requisite</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>optional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='subsysType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pci</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='capsType'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='pciBackend'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hostdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <rng supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>random</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <filesystem supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='driverType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>path</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>handle</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtiofs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </filesystem>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <tpm supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-tis</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-crb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emulator</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>external</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendVersion'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>2.0</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </tpm>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <redirdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </redirdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <channel supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </channel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <crypto supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </crypto>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <interface supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>passt</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <panic supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>isa</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>hyperv</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </panic>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <console supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>null</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dev</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pipe</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stdio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>udp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tcp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu-vdagent</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </console>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <gic supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <vmcoreinfo supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <genid supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backingStoreInput supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backup supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <async-teardown supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <ps2 supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sev supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sgx supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hyperv supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='features'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>relaxed</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vapic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>spinlocks</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vpindex</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>runtime</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>synic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stimer</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reset</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vendor_id</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>frequencies</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reenlightenment</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tlbflush</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ipi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>avic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emsr_bitmap</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>xmm_input</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <spinlocks>4095</spinlocks>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <stimer_direct>on</stimer_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hyperv>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <launchSecurity supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='sectype'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tdx</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </launchSecurity>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]: </domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
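The XML dumped above is libvirt's domainCapabilities document, which nova fetches per emulator/arch/machine-type via _get_domain_capabilities. As a minimal sketch of how the same data could be pulled and summarized outside nova, assuming the libvirt-python bindings are installed and qemu:///system is reachable on this host, the following hypothetical script lists each custom-mode CPU model and the features blocking the unusable ones (script name, argument values, and output format are illustrative, not from the log):

    # list_cpu_models.py -- hypothetical sketch, not part of nova.
    # Fetches the same domainCapabilities XML as logged above and
    # summarizes custom-mode CPU model usability and blockers.
    import xml.etree.ElementTree as ET

    import libvirt  # pip install libvirt-python (assumption: bindings available)

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, as reported in <path> above
        'x86_64',                 # arch; the dump that follows uses i686
        'q35',                    # machine type alias
        'kvm',                    # virt type
        0,                        # flags
    )
    root = ET.fromstring(caps_xml)

    # <blockers model='...'> elements are siblings of <model> inside the
    # custom <mode>, keyed by model name, exactly as in the dump above.
    mode = root.find("./cpu/mode[@name='custom']")
    for model in mode.iterfind('model'):
        name = model.text
        if model.get('usable') == 'yes':
            print(f'{name}: usable')
        else:
            blockers = mode.find(f"blockers[@model='{name}']")
            missing = ([f.get('name') for f in blockers.iterfind('feature')]
                       if blockers is not None else [])
            print(f"{name}: blocked by {', '.join(missing) or 'unknown'}")
    conn.close()

Run against this host it would report, for example, EPYC-Rome-v4 as usable and Broadwell as blocked by erms, hle, invpcid, pcid, rtm, matching the entries in the dump.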
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.150 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 01:37:05 compute-0 nova_compute[350387]: <domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <domain>kvm</domain>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <arch>i686</arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <vcpu max='4096'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <iothreads supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <os supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='firmware'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <loader supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>rom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pflash</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='readonly'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>yes</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='secure'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </loader>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </os>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='maximum' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='maximumMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-model' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <vendor>AMD</vendor>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='x2apic'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='stibp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='succor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lbrv'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='custom' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Dhyana-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-128'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-256'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-512'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <memoryBacking supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='sourceType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>anonymous</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>memfd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </memoryBacking>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <disk supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='diskDevice'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>disk</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cdrom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>floppy</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>lun</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>fdc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>sata</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <graphics supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vnc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egl-headless</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </graphics>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <video supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='modelType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vga</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cirrus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>none</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>bochs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ramfb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </video>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hostdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='mode'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>subsystem</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='startupPolicy'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>mandatory</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>requisite</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>optional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='subsysType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pci</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='capsType'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='pciBackend'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hostdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <rng supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>random</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <filesystem supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='driverType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>path</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>handle</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtiofs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </filesystem>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <tpm supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-tis</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-crb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emulator</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>external</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendVersion'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>2.0</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </tpm>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <redirdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </redirdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <channel supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </channel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <crypto supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </crypto>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <interface supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>passt</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <panic supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>isa</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>hyperv</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </panic>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <console supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>null</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dev</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pipe</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stdio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>udp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tcp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu-vdagent</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </console>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <gic supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <vmcoreinfo supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <genid supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backingStoreInput supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backup supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <async-teardown supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <ps2 supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sev supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sgx supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hyperv supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='features'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>relaxed</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vapic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>spinlocks</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vpindex</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>runtime</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>synic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stimer</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reset</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vendor_id</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>frequencies</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reenlightenment</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tlbflush</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ipi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>avic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emsr_bitmap</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>xmm_input</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <spinlocks>4095</spinlocks>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <stimer_direct>on</stimer_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hyperv>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <launchSecurity supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='sectype'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tdx</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </launchSecurity>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]: </domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.216 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.310 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 01:37:05 compute-0 nova_compute[350387]: <domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <domain>kvm</domain>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <arch>x86_64</arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <vcpu max='240'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <iothreads supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <os supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='firmware'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <loader supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>rom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pflash</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='readonly'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>yes</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='secure'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </loader>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </os>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='maximum' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='maximumMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-model' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <vendor>AMD</vendor>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='x2apic'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='stibp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='succor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lbrv'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='custom' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Dhyana-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-128'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-256'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-512'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <memoryBacking supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='sourceType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>anonymous</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>memfd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </memoryBacking>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <disk supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='diskDevice'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>disk</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cdrom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>floppy</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>lun</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ide</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>fdc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>sata</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <graphics supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vnc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egl-headless</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </graphics>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <video supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='modelType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vga</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cirrus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>none</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>bochs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ramfb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </video>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hostdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='mode'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>subsystem</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='startupPolicy'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>mandatory</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>requisite</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>optional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='subsysType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pci</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='capsType'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='pciBackend'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hostdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <rng supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>random</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <filesystem supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='driverType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>path</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>handle</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtiofs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </filesystem>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <tpm supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-tis</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-crb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emulator</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>external</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendVersion'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>2.0</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </tpm>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <redirdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </redirdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <channel supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </channel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <crypto supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </crypto>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <interface supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>passt</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <panic supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>isa</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>hyperv</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </panic>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <console supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>null</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dev</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pipe</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stdio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>udp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tcp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu-vdagent</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </console>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <gic supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <vmcoreinfo supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <genid supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backingStoreInput supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backup supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <async-teardown supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <ps2 supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sev supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sgx supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hyperv supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='features'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>relaxed</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vapic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>spinlocks</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vpindex</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>runtime</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>synic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stimer</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reset</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vendor_id</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>frequencies</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reenlightenment</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tlbflush</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ipi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>avic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emsr_bitmap</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>xmm_input</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <spinlocks>4095</spinlocks>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <stimer_direct>on</stimer_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hyperv>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <launchSecurity supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='sectype'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tdx</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </launchSecurity>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]: </domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
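[editor's note: the <domainCapabilities> documents above and below are what libvirt's getDomainCapabilities API returns; nova's _get_domain_capabilities logs them verbatim at debug level. As a minimal sketch, assuming a local qemu:///system connection and the libvirt-python bindings (the URI and variable names here are illustrative, not taken from this log), the same document can be fetched directly and the usable custom CPU models filtered out of it:

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")  # local libvirt daemon
    # Arguments mirror the values seen in the dump below:
    # emulator binary, arch, machine type, virt type, flags.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0)
    conn.close()

    # Models whose <blockers> list is non-empty are reported usable='no';
    # keep only the custom-mode models this host can actually run.
    root = ET.fromstring(caps_xml)
    usable = [m.text
              for m in root.findall(".//cpu/mode[@name='custom']/model")
              if m.get("usable") == "yes"]
    print(usable)

On this host the filter would retain models such as Dhyana, EPYC-v1/v2 and EPYC-Rome-v4, since every other model listed below carries blocking features (e.g. xsaves, erms, pcid) that the EPYC-Rome hardware does not expose to guests.]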
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.331 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 01:37:05 compute-0 nova_compute[350387]: <domainCapabilities>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <domain>kvm</domain>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <arch>x86_64</arch>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <vcpu max='4096'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <iothreads supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <os supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='firmware'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>efi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <loader supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>rom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pflash</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='readonly'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>yes</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='secure'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>yes</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>no</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </loader>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </os>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='hostPassthroughMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='maximum' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='maximumMigratable'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>on</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>off</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='host-model' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <vendor>AMD</vendor>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='x2apic'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='hypervisor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='stibp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='succor'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lbrv'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pause-filter'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <feature policy='disable' name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <mode name='custom' supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Broadwell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Cooperlake-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Denverton-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Dhyana-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='auto-ibrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amd-psfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='no-nested-data-bp'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='null-sel-clr-base'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='stibp-always-on'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='EPYC-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='GraniteRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-128'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-256'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx10-512'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='prefetchiti'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Haswell-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v6'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Icelake-Server-v7'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='IvyBridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='KnightsMill-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4fmaps'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-4vnniw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512er'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512pf'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G4-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Opteron_G5-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fma4'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tbm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xop'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SapphireRapids-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='amx-tile'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-bf16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-fp16'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bitalg'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vbmi2'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrc'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fzrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='la57'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='taa-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='tsx-ldtrk'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xfd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='SierraForest-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ifma'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-ne-convert'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx-vnni-int8'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='bus-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cmpccxadd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fbsdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='fsrs'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ibrs-all'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mcdt-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pbrsb-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='psdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='serialize'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vaes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='vpclmulqdq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Client-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='hle'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='rtm'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Skylake-Server-v5'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512bw'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512cd'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512dq'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512f'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='avx512vl'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='invpcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pcid'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='pku'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='mpx'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v2'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v3'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='core-capability'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='split-lock-detect'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='Snowridge-v4'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='cldemote'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='erms'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='gfni'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdir64b'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='movdiri'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='xsaves'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='athlon-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='core2duo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='coreduo-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='n270-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='ss'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <blockers model='phenom-v1'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnow'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <feature name='3dnowext'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </blockers>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </mode>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <memoryBacking supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <enum name='sourceType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>anonymous</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <value>memfd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </memoryBacking>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <disk supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='diskDevice'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>disk</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cdrom</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>floppy</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>lun</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>fdc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>sata</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <graphics supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vnc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egl-headless</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </graphics>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <video supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='modelType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vga</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>cirrus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>none</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>bochs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ramfb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </video>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hostdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='mode'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>subsystem</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='startupPolicy'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>mandatory</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>requisite</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>optional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='subsysType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pci</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>scsi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='capsType'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='pciBackend'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hostdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <rng supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtio-non-transitional</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>random</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>egd</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <filesystem supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='driverType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>path</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>handle</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>virtiofs</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </filesystem>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <tpm supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-tis</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tpm-crb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emulator</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>external</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendVersion'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>2.0</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </tpm>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <redirdev supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='bus'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>usb</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </redirdev>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <channel supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </channel>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <crypto supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendModel'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>builtin</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </crypto>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <interface supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='backendType'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>default</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>passt</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <panic supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='model'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>isa</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>hyperv</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </panic>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <console supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='type'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>null</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vc</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pty</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dev</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>file</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>pipe</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stdio</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>udp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tcp</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>unix</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>qemu-vdagent</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>dbus</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </console>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  <features>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <gic supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <vmcoreinfo supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <genid supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backingStoreInput supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <backup supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <async-teardown supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <ps2 supported='yes'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sev supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <sgx supported='no'/>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <hyperv supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='features'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>relaxed</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vapic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>spinlocks</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vpindex</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>runtime</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>synic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>stimer</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reset</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>vendor_id</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>frequencies</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>reenlightenment</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tlbflush</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>ipi</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>avic</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>emsr_bitmap</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>xmm_input</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <spinlocks>4095</spinlocks>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <stimer_direct>on</stimer_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </defaults>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </hyperv>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    <launchSecurity supported='yes'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      <enum name='sectype'>
Nov 26 01:37:05 compute-0 nova_compute[350387]:        <value>tdx</value>
Nov 26 01:37:05 compute-0 nova_compute[350387]:      </enum>
Nov 26 01:37:05 compute-0 nova_compute[350387]:    </launchSecurity>
Nov 26 01:37:05 compute-0 nova_compute[350387]:  </features>
Nov 26 01:37:05 compute-0 nova_compute[350387]: </domainCapabilities>
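The <domainCapabilities> document that nova_compute just dumped is libvirt's per-binary capability report. With the libvirt Python bindings the same document can be fetched directly; a minimal sketch, assuming a local qemu:///system connection on a KVM/x86_64 host like this one (argument values are illustrative):

    import libvirt  # libvirt-python

    conn = libvirt.open('qemu:///system')
    # (emulatorbin, arch, machine, virttype, flags); None lets libvirt pick defaults
    xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    print(xml)  # the same <domainCapabilities> document as logged above
    conn.close()

Within that document, each <model usable='no'> entry is paired with a <blockers model='...'> element naming the features this host lacks (for Snowridge above: cldemote, core-capability, erms, gfni, movdir64b, movdiri, mpx, split-lock-detect). A sketch of extracting that mapping with the standard library (the helper name is made up, not nova code):

    import xml.etree.ElementTree as ET

    def unusable_models(domcaps_xml: str) -> dict:
        """Map each usable='no' CPU model to the feature names blocking it."""
        root = ET.fromstring(domcaps_xml)
        blocked = {}
        for mode in root.iter('mode'):
            for model in mode.findall('model'):
                if model.get('usable') == 'no':
                    blocked[model.text] = []
            for blk in mode.findall('blockers'):
                if blk.get('model') in blocked:
                    blocked[blk.get('model')] = [
                        f.get('name') for f in blk.findall('feature')]
        return blocked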
Nov 26 01:37:05 compute-0 nova_compute[350387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.474 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.475 350391 INFO nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Secure Boot support detected#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.479 350391 INFO nova.virt.libvirt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.496 350391 DEBUG nova.virt.libvirt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
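The "Enabling emulated TPM support" decision corresponds to the <tpm supported='yes'> block logged above (models tpm-tis/tpm-crb, backend 'emulator', i.e. swtpm). A hedged sketch of such a capability check, not nova's actual _check_vtpm_support, which also validates configuration and the swtpm user/binary:

    import xml.etree.ElementTree as ET

    def emulated_tpm_supported(domcaps_xml: str) -> bool:
        root = ET.fromstring(domcaps_xml)
        tpm = root.find('./devices/tpm')
        if tpm is None or tpm.get('supported') != 'yes':
            return False
        backends = [v.text for v in tpm.findall("enum[@name='backendModel']/value")]
        return 'emulator' in backends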
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.530 350391 INFO nova.virt.node [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Determined node identity 0e9e5c9b-dee2-4076-966b-e19b2697b966 from /var/lib/nova/compute_id#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.553 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Verified node 0e9e5c9b-dee2-4076-966b-e19b2697b966 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
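The node identity determined above is persisted as a bare UUID in /var/lib/nova/compute_id, so reading it back outside nova is a one-liner (a sketch; the path comes straight from the log line):

    from pathlib import Path

    node_uuid = Path('/var/lib/nova/compute_id').read_text().strip()
    print(node_uuid)  # 0e9e5c9b-dee2-4076-966b-e19b2697b966 on this host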
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.576 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.668 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.669 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.669 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.669 350391 DEBUG nova.compute.resource_tracker [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:37:05 compute-0 nova_compute[350387]: 2025-11-26 01:37:05.670 350391 DEBUG oslo_concurrency.processutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:37:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:37:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027706626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.187 350391 DEBUG oslo_concurrency.processutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
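The resource audit sizes its Ceph-backed storage by shelling out to ceph df (the 0.517 s subprocess above). A standalone reproduction of that call, assuming the documented top-level 'stats' keys in ceph's JSON output:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    avail_gib = stats['total_avail_bytes'] / 1024**3
    total_gib = stats['total_bytes'] / 1024**3
    # matches the pgmap lines above: 60 GiB / 60 GiB avail
    print(f'{avail_gib:.0f} GiB / {total_gib:.0f} GiB avail')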
Nov 26 01:37:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.619 350391 WARNING nova.virt.libvirt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.620 350391 DEBUG nova.compute.resource_tracker [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4598MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.620 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.620 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.827 350391 DEBUG nova.compute.resource_tracker [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.828 350391 DEBUG nova.compute.resource_tracker [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.919 350391 DEBUG nova.scheduler.client.report [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.959 350391 DEBUG nova.scheduler.client.report [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.960 350391 DEBUG nova.compute.provider_tree [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:37:06 compute-0 nova_compute[350387]: 2025-11-26 01:37:06.984 350391 DEBUG nova.scheduler.client.report [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.006 350391 DEBUG nova.scheduler.client.report [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.027 350391 DEBUG oslo_concurrency.processutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:37:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:37:07 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950314235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.486 350391 DEBUG oslo_concurrency.processutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.497 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.498 350391 INFO nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] kernel doesn't support AMD SEV#033[00m
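The probe at host.py:1803 just reads the kvm_amd module parameter; on this host it contains 'N', hence the INFO line. A sketch of the same check (treating '1'/'Y' as enabled is an assumption about the parameter's encodings):

    from pathlib import Path

    def kernel_supports_amd_sev() -> bool:
        try:
            value = Path('/sys/module/kvm_amd/parameters/sev').read_text().strip()
        except OSError:
            return False  # kvm_amd module not loaded at all
        return value in ('1', 'Y', 'y')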
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.499 350391 DEBUG nova.compute.provider_tree [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.500 350391 DEBUG nova.virt.libvirt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.594 350391 DEBUG nova.scheduler.client.report [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updated inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.594 350391 DEBUG nova.compute.provider_tree [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updating resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.595 350391 DEBUG nova.compute.provider_tree [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
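Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical vCPUs with a 4.0 ratio advertise 32 schedulable VCPUs while MEMORY_MB keeps its 512 MB reservation. Worked with the logged values:

    def capacity(inv: dict) -> int:
        # placement's effective capacity per resource class
        return int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])

    print(capacity({'total': 8, 'reserved': 0, 'allocation_ratio': 4.0}))      # 32 VCPU
    print(capacity({'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0})) # 7167 MEMORY_MB
    print(capacity({'total': 59, 'reserved': 0, 'allocation_ratio': 0.9}))     # 53 DISK_GB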
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.698 350391 DEBUG nova.compute.provider_tree [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Updating resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.726 350391 DEBUG nova.compute.resource_tracker [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.727 350391 DEBUG oslo_concurrency.lockutils [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.727 350391 DEBUG nova.service [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.831 350391 DEBUG nova.service [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 26 01:37:07 compute-0 nova_compute[350387]: 2025-11-26 01:37:07.832 350391 DEBUG nova.servicegroup.drivers.db [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 26 01:37:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:09 compute-0 podman[350721]: 2025-11-26 01:37:09.611224909 +0000 UTC m=+0.148332771 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 01:37:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:37:11 compute-0 podman[350741]: 2025-11-26 01:37:11.578692926 +0000 UTC m=+0.130178122 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:37:11 compute-0 podman[350742]: 2025-11-26 01:37:11.588239883 +0000 UTC m=+0.125536931 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:37:11 compute-0 podman[350743]: 2025-11-26 01:37:11.642465054 +0000 UTC m=+0.171074918 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller)
Nov 26 01:37:11 compute-0 systemd-logind[800]: New session 57 of user zuul.
Nov 26 01:37:11 compute-0 systemd[1]: Started Session 57 of User zuul.
Nov 26 01:37:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:12 compute-0 podman[350862]: 2025-11-26 01:37:12.577899751 +0000 UTC m=+0.106252860 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:37:12 compute-0 podman[350861]: 2025-11-26 01:37:12.586966105 +0000 UTC m=+0.128400741 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
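Each health_status=healthy event in these podman lines is the result of podman running the container's configured healthcheck command (the 'test': '/openstack/healthcheck ...' entries) on its timer. The current state can be queried back the same way; a sketch shelling out to the podman CLI (container name taken from the log):

    import subprocess

    def container_health(name: str) -> str:
        return subprocess.check_output(
            ['podman', 'inspect', name,
             '--format', '{{.State.Health.Status}}'],
            text=True,
        ).strip()

    print(container_health('ovn_controller'))  # 'healthy' while the streak above holds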
Nov 26 01:37:13 compute-0 python3.9[351001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:37:13 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:37:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:15 compute-0 python3.9[351158]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:37:15 compute-0 systemd[1]: Reloading.
Nov 26 01:37:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:37:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:37:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:17 compute-0 python3.9[351344]: ansible-ansible.builtin.service_facts Invoked
Nov 26 01:37:17 compute-0 network[351361]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 01:37:17 compute-0 network[351362]: 'network-scripts' will be removed from distribution in near future.
Nov 26 01:37:17 compute-0 network[351363]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 01:37:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:18 compute-0 podman[351376]: 2025-11-26 01:37:18.786115924 +0000 UTC m=+0.142674022 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 26 01:37:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:21 compute-0 podman[351460]: 2025-11-26 01:37:21.092583894 +0000 UTC m=+0.105984603 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 26 01:37:21 compute-0 podman[351459]: 2025-11-26 01:37:21.108072128 +0000 UTC m=+0.139905284 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, architecture=x86_64, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 01:37:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:23 compute-0 python3.9[351695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:37:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:37:24.946 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:37:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:37:24.946 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:37:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:37:24.947 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:37:25 compute-0 python3.9[351848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:26 compute-0 python3.9[352000]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:28 compute-0 python3.9[352152]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
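The _raw_params value above uses journald's #012 escapes for embedded newlines; decoded, the task runs this shell fragment (reproduced from the log line itself, not new code):

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi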
Nov 26 01:37:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:29 compute-0 python3.9[352304]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 01:37:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:29 compute-0 podman[158021]: time="2025-11-26T01:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:37:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:37:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8108 "" "Go-http-client/1.1"
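The two GET lines above are a client (here the podman exporter) polling podman's libpod REST API over its unix socket. The same query can be reproduced with curl; the endpoint is copied from the access log, while the socket path is an assumption based on the CONTAINER_HOST setting logged for the podman_exporter container:

    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'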
Nov 26 01:37:29 compute-0 nova_compute[350387]: 2025-11-26 01:37:29.834 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:37:29 compute-0 nova_compute[350387]: 2025-11-26 01:37:29.866 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210

Nov 26 01:37:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:31 compute-0 python3.9[352456]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 01:37:31 compute-0 systemd[1]: Reloading.
Nov 26 01:37:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 01:37:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 01:37:31 compute-0 openstack_network_exporter[160178]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:37:31 compute-0 openstack_network_exporter[160178]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:37:31 compute-0 openstack_network_exporter[160178]: ERROR   01:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:37:31 compute-0 openstack_network_exporter[160178]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:37:31 compute-0 openstack_network_exporter[160178]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:37:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 26 01:37:32 compute-0 python3.9[352644]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
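Taken together, the entries from 01:37:23 onward trace a standard removal of the legacy tripleo_ceilometer_agent_compute unit: stop and disable, delete both unit-file locations, daemon-reload, then clear any failed state. The manual shell equivalent of the logged task sequence would be:

    systemctl disable --now tripleo_ceilometer_agent_compute.service
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service \
          /etc/systemd/system/tripleo_ceilometer_agent_compute.service
    systemctl daemon-reload
    systemctl reset-failed tripleo_ceilometer_agent_compute.service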
Nov 26 01:37:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 26 01:37:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:35 compute-0 python3.9[352797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:37:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 26 01:37:36 compute-0 python3.9[352947]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:37:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:37:38 compute-0 python3.9[353099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:39 compute-0 python3.9[353175]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:37:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:37:40 compute-0 podman[353299]: 2025-11-26 01:37:40.282335974 +0000 UTC m=+0.119409999 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:37:40 compute-0 python3.9[353344]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:37:41
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes']
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:37:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:37:41 compute-0 podman[353470]: 2025-11-26 01:37:41.747989129 +0000 UTC m=+0.100240381 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:37:41 compute-0 podman[353468]: 2025-11-26 01:37:41.78796211 +0000 UTC m=+0.143440943 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm)
Nov 26 01:37:41 compute-0 podman[353535]: 2025-11-26 01:37:41.91170237 +0000 UTC m=+0.137968780 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:37:41 compute-0 python3.9[353537]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 26 01:37:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:37:43 compute-0 podman[353687]: 2025-11-26 01:37:43.570796039 +0000 UTC m=+0.110308964 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:37:43 compute-0 podman[353686]: 2025-11-26 01:37:43.581254933 +0000 UTC m=+0.130932423 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Nov 26 01:37:43 compute-0 python3.9[353740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 26 01:37:44 compute-0 python3.9[353828]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:45 compute-0 python3.9[353978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:46 compute-0 python3.9[354054]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 26 01:37:47 compute-0 python3.9[354204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 26 01:37:48 compute-0 python3.9[354280]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:49 compute-0 podman[354390]: 2025-11-26 01:37:49.58567925 +0000 UTC m=+0.131892289 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:37:49 compute-0 python3.9[354451]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:37:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
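The pg target figures in this autoscaler pass are reproducible as usage ratio x bias x 300. The factor 300 is an inference, not in the log: presumably mon_target_pg_per_osd (default 100) times three OSDs backing the 60 GiB capacity shown in the pgmap lines. A quick check against two of the pools above:

    # usage_ratio * bias * (100 target PGs per OSD * 3 OSDs, inferred)
    python3 -c 'print(7.185749983720779e-06 * 1.0 * 300)'   # reproduces the .mgr target above
    python3 -c 'print(5.087256625643029e-07 * 4.0 * 300)'   # reproduces the cephfs.cephfs.meta target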
Nov 26 01:37:51 compute-0 podman[354641]: 2025-11-26 01:37:51.296776938 +0000 UTC m=+0.108296988 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 26 01:37:51 compute-0 podman[354648]: 2025-11-26 01:37:51.340525425 +0000 UTC m=+0.148148845 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:37:51 compute-0 python3.9[354666]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:37:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c727bb54-9148-4404-94e7-3721c95095cd does not exist
Nov 26 01:37:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9f29ae5d-c60e-4789-9dd7-b446317b7a4f does not exist
Nov 26 01:37:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c88fcac-03e2-4f9b-86e8-c91c407ffa31 does not exist
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:52 compute-0 python3.9[354941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:37:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:37:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
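The mon_command payloads the mgr dispatches above are plain JSON with a "prefix" key, and each has a direct CLI equivalent; the same requests issued from a shell (assumes an admin keyring on the host) would be:

    ceph osd tree destroyed -f json
    ceph config generate-minimal-conf
    ceph auth get client.bootstrap-osd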
Nov 26 01:37:53 compute-0 python3.9[355112]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.216757762 +0000 UTC m=+0.087745112 container create 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.182663626 +0000 UTC m=+0.053651026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:53 compute-0 systemd[1]: Started libpod-conmon-0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab.scope.
Nov 26 01:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.353525597 +0000 UTC m=+0.224512997 container init 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.37003428 +0000 UTC m=+0.241021620 container start 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.376670166 +0000 UTC m=+0.247657506 container attach 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:37:53 compute-0 mystifying_bassi[355177]: 167 167
Nov 26 01:37:53 compute-0 systemd[1]: libpod-0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab.scope: Deactivated successfully.
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.382277203 +0000 UTC m=+0.253264543 container died 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cacc60cefab14c5070ca21483cfdff635dcffde8c64aedf76303539d5c1b238-merged.mount: Deactivated successfully.
Nov 26 01:37:53 compute-0 podman[355137]: 2025-11-26 01:37:53.480091026 +0000 UTC m=+0.351078356 container remove 0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:37:53 compute-0 systemd[1]: libpod-conmon-0a5e8b673dcc6f5bfe6f05c2a551207641160d039603f9916c4fe098b28bd7ab.scope: Deactivated successfully.
Nov 26 01:37:53 compute-0 podman[355253]: 2025-11-26 01:37:53.751588168 +0000 UTC m=+0.079541791 container create 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:37:53 compute-0 podman[355253]: 2025-11-26 01:37:53.722748919 +0000 UTC m=+0.050702572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:53 compute-0 systemd[1]: Started libpod-conmon-8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c.scope.
Nov 26 01:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
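The recurring "supports timestamps until 2038 (0x7fffffff)" messages mean these XFS filesystems were created without the bigtime feature, so their inode timestamps are 32-bit signed seconds. A quick, purely illustrative check of what that hex limit is as a date:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second the kernel warns about.
    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00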
Nov 26 01:37:53 compute-0 podman[355253]: 2025-11-26 01:37:53.93496039 +0000 UTC m=+0.262914043 container init 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:37:53 compute-0 podman[355253]: 2025-11-26 01:37:53.961892155 +0000 UTC m=+0.289845808 container start 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:37:53 compute-0 podman[355253]: 2025-11-26 01:37:53.969228511 +0000 UTC m=+0.297182134 container attach 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:37:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
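The ceph-mgr "pgmap" debug lines are a one-line cluster summary: PG count and states, logical data stored, raw space used, and raw space available. When pulling these out of a journal dump, a small illustrative parser helps (the sample line is copied from this log):

    import re

    line = ("pgmap v807: 321 pgs: 321 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
        line,
    )
    print(m.groupdict() if m else "no match")
    # {'ver': '807', 'pgs': '321', 'data': '456 KiB', 'used': '148 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}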
Nov 26 01:37:54 compute-0 python3.9[355347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:54 compute-0 python3.9[355423]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
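The two ansible tasks above (stat, then file) first checksum the rendered template with get_checksum=True and checksum_algorithm=sha1, then enforce mode=420, which is simply the decimal spelling of octal 0644. Both checks are easy to reproduce by hand; the path is taken from the log, the rest is illustrative:

    import hashlib
    import stat

    path = "/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf"

    # SHA-1 of the file contents, as ansible's stat module reports it.
    with open(path, "rb") as f:
        print(hashlib.sha1(f.read()).hexdigest())

    # mode=420 is decimal; as octal it is the familiar 0644.
    print(oct(420))                            # '0o644'
    print(stat.filemode(stat.S_IFREG | 420))   # '-rw-r--r--'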
Nov 26 01:37:55 compute-0 great_carson[355300]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:37:55 compute-0 great_carson[355300]: --> relative data size: 1.0
Nov 26 01:37:55 compute-0 great_carson[355300]: --> All data devices are unavailable
Nov 26 01:37:55 compute-0 systemd[1]: libpod-8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c.scope: Deactivated successfully.
Nov 26 01:37:55 compute-0 podman[355253]: 2025-11-26 01:37:55.333195594 +0000 UTC m=+1.661149267 container died 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:37:55 compute-0 systemd[1]: libpod-8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c.scope: Consumed 1.309s CPU time.
Nov 26 01:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e50507ba515e439c4fbef7ce59fd99ef389505812c7ca26f936ed882f19e9634-merged.mount: Deactivated successfully.
Nov 26 01:37:55 compute-0 podman[355253]: 2025-11-26 01:37:55.429518145 +0000 UTC m=+1.757471768 container remove 8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:37:55 compute-0 systemd[1]: libpod-conmon-8ef61421b3f1982c1cbd357595de31762f337f01a39da28ca1691352c4092e8c.scope: Deactivated successfully.
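The great_carson output above ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") has the shape of a ceph-volume lvm batch --report pass: "unavailable" here means the three LVs are already consumed by existing OSDs (consistent with the lvm list dump further down), not that they are faulty. A hedged re-run of that report, assuming the LV paths seen on this host:

    import subprocess

    # Illustrative only: ask ceph-volume whether these LVs could host new OSDs.
    cmd = ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
           "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(out.stdout or out.stderr)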
Nov 26 01:37:55 compute-0 python3.9[355659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.390194791 +0000 UTC m=+0.095036676 container create edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.345803667 +0000 UTC m=+0.050645612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:56 compute-0 systemd[1]: Started libpod-conmon-edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe.scope.
Nov 26 01:37:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:56 compute-0 python3.9[355827]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.518307613 +0000 UTC m=+0.223149538 container init edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.533021646 +0000 UTC m=+0.237863571 container start edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:37:56 compute-0 laughing_kilby[355842]: 167 167
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.539686943 +0000 UTC m=+0.244529148 container attach edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:37:56 compute-0 systemd[1]: libpod-edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe.scope: Deactivated successfully.
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.542189453 +0000 UTC m=+0.247031408 container died edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c435b29fdba49ff98a2e0c6d74480e3b3de3bdc0a2ce079e43cf67aae911e5e-merged.mount: Deactivated successfully.
Nov 26 01:37:56 compute-0 podman[355825]: 2025-11-26 01:37:56.612304139 +0000 UTC m=+0.317146054 container remove edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:37:56 compute-0 systemd[1]: libpod-conmon-edc3471da3ef037910f932024de2a3d8a7b63bcb4ca620ec22de2626331858fe.scope: Deactivated successfully.
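The lone "167 167" printed by laughing_kilby (and again by xenodochial_merkle below) matches the fixed uid/gid that ceph images reserve for the ceph user; cephadm commonly spins up a throwaway container just to probe that ownership. That reading is an inference from the output, not stated in the log; a sketch of the equivalent probe:

    import subprocess

    # Hypothetical reproduction of the uid/gid probe against the same image.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: 167 167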
Nov 26 01:37:56 compute-0 podman[355889]: 2025-11-26 01:37:56.860029215 +0000 UTC m=+0.076045343 container create 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:37:56 compute-0 systemd[1]: Started libpod-conmon-94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7.scope.
Nov 26 01:37:56 compute-0 podman[355889]: 2025-11-26 01:37:56.840727644 +0000 UTC m=+0.056743792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3e483f4d462ec49d95b1de09cb186c2d1ae6c5564ae65e8510c0ffc5bdcfef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3e483f4d462ec49d95b1de09cb186c2d1ae6c5564ae65e8510c0ffc5bdcfef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3e483f4d462ec49d95b1de09cb186c2d1ae6c5564ae65e8510c0ffc5bdcfef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3e483f4d462ec49d95b1de09cb186c2d1ae6c5564ae65e8510c0ffc5bdcfef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:56 compute-0 podman[355889]: 2025-11-26 01:37:56.986778879 +0000 UTC m=+0.202795007 container init 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:37:57 compute-0 podman[355889]: 2025-11-26 01:37:57.001652606 +0000 UTC m=+0.217668734 container start 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:37:57 compute-0 podman[355889]: 2025-11-26 01:37:57.006733798 +0000 UTC m=+0.222749926 container attach 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:37:57 compute-0 python3.9[356034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:37:57 compute-0 charming_bohr[355939]: {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    "0": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "devices": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "/dev/loop3"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            ],
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_name": "ceph_lv0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_size": "21470642176",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "name": "ceph_lv0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "tags": {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_name": "ceph",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.crush_device_class": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.encrypted": "0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_id": "0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.vdo": "0"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            },
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "vg_name": "ceph_vg0"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        }
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    ],
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    "1": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "devices": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "/dev/loop4"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            ],
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_name": "ceph_lv1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_size": "21470642176",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "name": "ceph_lv1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "tags": {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_name": "ceph",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.crush_device_class": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.encrypted": "0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_id": "1",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.vdo": "0"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            },
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "vg_name": "ceph_vg1"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        }
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    ],
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    "2": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "devices": [
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "/dev/loop5"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            ],
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_name": "ceph_lv2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_size": "21470642176",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "name": "ceph_lv2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "tags": {
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.cluster_name": "ceph",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.crush_device_class": "",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.encrypted": "0",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osd_id": "2",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:                "ceph.vdo": "0"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            },
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "type": "block",
Nov 26 01:37:57 compute-0 charming_bohr[355939]:            "vg_name": "ceph_vg2"
Nov 26 01:37:57 compute-0 charming_bohr[355939]:        }
Nov 26 01:37:57 compute-0 charming_bohr[355939]:    ]
Nov 26 01:37:57 compute-0 charming_bohr[355939]: }
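The JSON that charming_bohr just printed has the layout of ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags unpacked. A short illustrative consumer that flattens it into an OSD-to-device table, reading from a hypothetical capture file:

    import json

    with open("ceph-volume-lvm-list.json") as f:  # hypothetical capture of the output above
        osd_report = json.load(f)

    for osd_id, lvs in sorted(osd_report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 835781ef-644a-4834-abb3-029e5bcba0ff
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 a345f9b0-19f1-464f-95c4-9c68bb202f1e
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 8f697525-afad-4f38-820d-80587338cf3b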
Nov 26 01:37:57 compute-0 systemd[1]: libpod-94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7.scope: Deactivated successfully.
Nov 26 01:37:57 compute-0 podman[355889]: 2025-11-26 01:37:57.759359411 +0000 UTC m=+0.975375529 container died 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb3e483f4d462ec49d95b1de09cb186c2d1ae6c5564ae65e8510c0ffc5bdcfef-merged.mount: Deactivated successfully.
Nov 26 01:37:57 compute-0 podman[355889]: 2025-11-26 01:37:57.839266082 +0000 UTC m=+1.055282220 container remove 94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:37:57 compute-0 systemd[1]: libpod-conmon-94dd9269708a2f14aa0aba6f645c158dedda08b991de564b08789bbbdf8c35f7.scope: Deactivated successfully.
Nov 26 01:37:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:37:58 compute-0 python3.9[356175]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:37:58 compute-0 podman[356342]: 2025-11-26 01:37:58.821738999 +0000 UTC m=+0.075490957 container create d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:37:58 compute-0 podman[356342]: 2025-11-26 01:37:58.787066086 +0000 UTC m=+0.040818104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:58 compute-0 systemd[1]: Started libpod-conmon-d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473.scope.
Nov 26 01:37:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:58 compute-0 podman[356342]: 2025-11-26 01:37:58.982419784 +0000 UTC m=+0.236171752 container init d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:37:58 compute-0 podman[356342]: 2025-11-26 01:37:58.997387254 +0000 UTC m=+0.251139182 container start d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:37:59 compute-0 podman[356342]: 2025-11-26 01:37:59.002397474 +0000 UTC m=+0.256149472 container attach d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:37:59 compute-0 xenodochial_merkle[356382]: 167 167
Nov 26 01:37:59 compute-0 systemd[1]: libpod-d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473.scope: Deactivated successfully.
Nov 26 01:37:59 compute-0 podman[356409]: 2025-11-26 01:37:59.078973371 +0000 UTC m=+0.049949831 container died d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7749741ea411c65e1599b90d9d770e4fe7108943472fec964e440dce11a0d6dd-merged.mount: Deactivated successfully.
Nov 26 01:37:59 compute-0 podman[356409]: 2025-11-26 01:37:59.136568876 +0000 UTC m=+0.107545286 container remove d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:37:59 compute-0 systemd[1]: libpod-conmon-d2f938fa443048bd3f373aa49cb9cdc0158a4db9c84aad859e779570fa330473.scope: Deactivated successfully.
Nov 26 01:37:59 compute-0 podman[356431]: 2025-11-26 01:37:59.400734983 +0000 UTC m=+0.076769613 container create 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:37:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:37:59 compute-0 podman[356431]: 2025-11-26 01:37:59.376655228 +0000 UTC m=+0.052689888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:37:59 compute-0 systemd[1]: Started libpod-conmon-92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942.scope.
Nov 26 01:37:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426cbea5fd4747b22f00302ab4758ca4c74af6a22ac89cb510987fbe27cff401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426cbea5fd4747b22f00302ab4758ca4c74af6a22ac89cb510987fbe27cff401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426cbea5fd4747b22f00302ab4758ca4c74af6a22ac89cb510987fbe27cff401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/426cbea5fd4747b22f00302ab4758ca4c74af6a22ac89cb510987fbe27cff401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:37:59 compute-0 podman[356431]: 2025-11-26 01:37:59.566710697 +0000 UTC m=+0.242745387 container init 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:37:59 compute-0 podman[356431]: 2025-11-26 01:37:59.582535791 +0000 UTC m=+0.258570441 container start 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:37:59 compute-0 podman[356431]: 2025-11-26 01:37:59.589327361 +0000 UTC m=+0.265362011 container attach 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:37:59 compute-0 podman[158021]: time="2025-11-26T01:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:37:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44153 "" "Go-http-client/1.1"
Nov 26 01:37:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8529 "" "Go-http-client/1.1"
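The two "GET /v4.9.3/libpod/..." lines are a client being served by the podman system service over its REST API on a Unix socket. The same containers/json query can be issued with nothing but the standard library; the socket path below is the usual rootful default and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Id"][:12], c.get("Names"))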
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.782 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling run can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7feff248b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff25140e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248a270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff35fda90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bb30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7feff25140b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7feff248b110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7feff248b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7feff248b740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7feff248b170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7feff248b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7feff248b1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7feff248ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7feff248b020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7feff248b0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7feff248ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7feff248bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff5310410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7feff248bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7feff24894f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7feff248b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7feff248bc20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7feff248b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7feff248bcb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff2489520>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff4ce75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248be00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248af90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.811 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7feff248aff0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7feff0ea82c0>] with cache [{}], pollster history [{'disk.device.usage': [], 'power.state': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'disk.device.write.bytes': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7feff55e84d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.812 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7feff248bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.812 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7feff248b0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.813 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7feff248bdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.814 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7feff248a2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.814 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7feff248aea0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.815 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7feff248afc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7feff3627890>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.816 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.816 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.817 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.818 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.821 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.821 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.821 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.822 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.827 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.827 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.827 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.828 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.828 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.828 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.828 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.829 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.829 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:37:59 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:37:59.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
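
[editor's note] The ceilometer cycle above registers each pollster against a shared ThreadPoolExecutor, runs the [local_instances] discovery once per cycle through a shared discovery cache, and skips the pollster when discovery returns nothing; since this host has no running instances, every pollster ends in a skip. A minimal sketch of that pattern, using illustrative names (run_pollster, discover_local_instances) rather than ceilometer's actual classes:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # this host runs no instances, so discovery is empty, which is why
        # every pollster above logs "Skip pollster ..., no resources found"
        return []

    POLLSTERS = ["disk.device.usage", "power.state", "cpu", "memory.usage"]

    def run_pollster(name, discovery_cache):
        # the discovery cache is shared by all pollsters in one cycle,
        # mirroring the "discovery cache [{'local_instances': []}]" field
        # (a real agent would guard this with a lock; omitted for brevity)
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"Polling {name} for {len(resources)} resources")

    cache = {}
    with ThreadPoolExecutor() as pool:
        for name in POLLSTERS:
            pool.submit(run_pollster, name, cache)
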
Nov 26 01:38:00 compute-0 python3.9[356479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]: {
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_id": 0,
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "type": "bluestore"
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    },
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_id": 2,
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "type": "bluestore"
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    },
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_id": 1,
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:        "type": "bluestore"
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]:    }
Nov 26 01:38:00 compute-0 pedantic_mendeleev[356447]: }
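
[editor's note] The JSON block above, printed by the short-lived ceph container, is the host's OSD inventory: three bluestore OSDs on LVM devices, all in fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A small stdlib parse of that structure, with one entry from the block embedded as sample input:

    import json

    # one entry copied from the output above; the real reply has three
    raw = '''
    {
        "835781ef-644a-4834-abb3-029e5bcba0ff": {
            "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
            "type": "bluestore"
        }
    }
    '''
    for uuid, osd in sorted(json.loads(raw).items(),
                            key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")
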
Nov 26 01:38:00 compute-0 python3.9[356572]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
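
[editor's note] The mode=420 in the ansible-ansible.legacy.file call above is not a typo: Ansible echoes the file mode back as a decimal integer, and 420 decimal is octal 0644 (rw-r--r--). A one-line check:

    # decimal 420 is the integer value of the octal permission 0644
    assert 420 == 0o644
    print(oct(420))  # -> 0o644, i.e. rw-r--r--
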
Nov 26 01:38:00 compute-0 systemd[1]: libpod-92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942.scope: Deactivated successfully.
Nov 26 01:38:00 compute-0 systemd[1]: libpod-92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942.scope: Consumed 1.236s CPU time.
Nov 26 01:38:00 compute-0 podman[356431]: 2025-11-26 01:38:00.831705426 +0000 UTC m=+1.507740056 container died 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-426cbea5fd4747b22f00302ab4758ca4c74af6a22ac89cb510987fbe27cff401-merged.mount: Deactivated successfully.
Nov 26 01:38:00 compute-0 podman[356431]: 2025-11-26 01:38:00.936493725 +0000 UTC m=+1.612528355 container remove 92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:38:00 compute-0 systemd[1]: libpod-conmon-92c4ca3b5000fc0f23255be2cc4d4891588e6d524e1c06f686f26ce33f310942.scope: Deactivated successfully.
Nov 26 01:38:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:38:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4db497a2-3800-40c9-84fb-580f1f97f4c2 does not exist
Nov 26 01:38:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 09e166eb-6f1c-4b7a-b79a-1c961af1dcda does not exist
Nov 26 01:38:01 compute-0 openstack_network_exporter[160178]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:38:01 compute-0 openstack_network_exporter[160178]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:38:01 compute-0 openstack_network_exporter[160178]: ERROR   01:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:38:01 compute-0 openstack_network_exporter[160178]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:38:01 compute-0 openstack_network_exporter[160178]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
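
[editor's note] The openstack_network_exporter errors above all trace back to missing control sockets: ovn-northd is apparently not running on this host, and the exporter cannot reach the OVS db server or an active userspace datapath. A manual probe of the same appctl call it attempted might look like the following sketch; the socket path is a common default, not taken from this host:

    import subprocess

    # re-run the command the exporter failed on; adjust --target to the
    # actual ovs-vswitchd control socket if it lives elsewhere
    result = subprocess.run(
        ["ovs-appctl",
         "--target", "/var/run/openvswitch/ovs-vswitchd.ctl",
         "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True)
    print(result.returncode, result.stderr.strip() or result.stdout[:200])
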
Nov 26 01:38:01 compute-0 python3.9[356795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:38:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:38:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:03 compute-0 python3.9[356871]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:04 compute-0 python3.9[357021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.301 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.303 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.303 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.303 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.333 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.333 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.335 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.336 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.337 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.337 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.338 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.338 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
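
[editor's note] The run of "Running periodic task ComputeManager._*" lines above is oslo.service's periodic-task machinery walking every decorated method on the compute manager; tasks such as _reclaim_queued_deletes return immediately when their config gate (here CONF.reclaim_instance_interval <= 0) is off. A minimal sketch of that decorator pattern, with an illustrative Manager class and task body, assuming oslo.service and oslo.config are installed:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True makes the task fire on the first pass
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_rebooting_instances(self, context):
            # each invocation is logged as "Running periodic task ..."
            print("polling rebooting instances")

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)
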
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.339 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.382 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.383 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.383 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.384 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.385 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:38:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:04 compute-0 python3.9[357117]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:38:04 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600071965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:38:04 compute-0 nova_compute[350387]: 2025-11-26 01:38:04.905 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
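
[editor's note] The ceph-mon lines just above show nova's `ceph df --format=json --id openstack` arriving at the monitor as a mon_command dispatched for client.openstack. The same query can be issued without the CLI through the rados Python binding; a sketch, assuming python3-rados is installed and the client.openstack keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    # the same JSON command the monitor logged:
    # {"prefix": "df", "format": "json"}
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(outbuf)
    print(df["stats"]["total_avail_bytes"])
    cluster.shutdown()
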
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.348 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.349 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4534MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.349 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.350 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.435 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.436 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.471 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:38:05 compute-0 python3.9[357289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:38:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951561265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
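The CMD/returned pairs around this point are oslo.concurrency's processutils wrapper at work: nova-compute's RBD storage backend shells out to `ceph df` to refresh pool capacity, and the ceph-mon audit entry above is the same request arriving on the monitor. A minimal sketch of the identical call, assuming python3-oslo-concurrency is installed and the client.openstack keyring referenced by /etc/ceph/ceph.conf is reachable:

    from oslo_concurrency import processutils

    # Same command nova_compute logs before/after this point; execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on a non-zero exit code.
    stdout, stderr = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout[:200])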
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.954 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.964 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.987 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.989 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:38:05 compute-0 nova_compute[350387]: 2025-11-26 01:38:05.990 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
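The inventory reported to placement at 01:38:05.987 is what actually bounds scheduling on this host: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. A quick check of the numbers above (values copied from the log; the formula is standard placement behaviour):

    # Inventory as logged by nova.scheduler.client.report above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1 -- the 8 physical vCPUs are
    # oversubscribed 4x, while disk is deliberately under-committed at 0.9.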
Nov 26 01:38:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:06 compute-0 python3.9[357367]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
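Note the `mode=420` in the ansible-ansible.legacy.file events above: it is not a typo for 0644 but the same value. An unquoted file mode in YAML reaches Ansible as a decimal integer, and decimal 420 is exactly octal 644; the later tasks from 01:38:14 onward pass the safer quoted form `0644` instead. One line confirms the equivalence:

    print(oct(420))      # -> 0o644
    assert 420 == 0o644  # decimal 420 and octal 644 are the same permission bits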
Nov 26 01:38:07 compute-0 python3.9[357517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:08 compute-0 python3.9[357593]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:09 compute-0 python3.9[357743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:10 compute-0 python3.9[357819]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:10 compute-0 podman[357865]: 2025-11-26 01:38:10.568424914 +0000 UTC m=+0.118529505 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
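The `container health_status ... health_status=healthy` events here and below are produced by the per-container systemd timers running `podman healthcheck run`. The same probe can be issued by hand; a small sketch, assuming the container names seen in this log:

    import subprocess

    # Exit code 0 means the container's configured healthcheck command passed.
    for name in ('ovn_metadata_agent', 'ovn_controller', 'ceilometer_agent_compute'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else 'unhealthy')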
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:11 compute-0 python3.9[357988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:11 compute-0 python3.9[358064]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:12 compute-0 podman[358066]: 2025-11-26 01:38:12.11680808 +0000 UTC m=+0.089682286 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:38:12 compute-0 podman[358065]: 2025-11-26 01:38:12.174486397 +0000 UTC m=+0.147292901 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 26 01:38:12 compute-0 podman[358067]: 2025-11-26 01:38:12.18102359 +0000 UTC m=+0.142624190 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:38:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:13 compute-0 python3.9[358280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:13 compute-0 podman[358282]: 2025-11-26 01:38:13.926692726 +0000 UTC m=+0.120825219 container health_status 48d7588f03ac33012b083c75f5259c81a1c62ae9a622e750c7129b78e0f02ae4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:38:13 compute-0 podman[358281]: 2025-11-26 01:38:13.949493646 +0000 UTC m=+0.150140931 container health_status 3302afe6d483da701b5d63f539b2f307031eb548c2e333722b1a74e8eaf35cc6 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 01:38:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:14 compute-0 python3.9[358400]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
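node_exporter above is launched with --web.config.file, so its scrape endpoint on port 9100 serves TLS using the certificates mounted under /etc/node_exporter/tls. A quick local probe that skips verification (suitable only as a hand check; a real scraper should trust the telemetry CA instead):

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # hand probe only; never disable in a scraper
    with urllib.request.urlopen('https://localhost:9100/metrics', context=ctx) as r:
        print(r.read(300).decode())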
Nov 26 01:38:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:15 compute-0 python3.9[358550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:16 compute-0 python3.9[358626]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:17 compute-0 python3.9[358776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:18 compute-0 python3.9[358852]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:19 compute-0 python3.9[359004]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:20 compute-0 podman[359129]: 2025-11-26 01:38:20.434393425 +0000 UTC m=+0.129382709 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 26 01:38:20 compute-0 python3.9[359176]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:21 compute-0 podman[359300]: 2025-11-26 01:38:21.598601849 +0000 UTC m=+0.158547257 container health_status 3a4f267dab4e67df6705707624dea74b53de5646398f7474583b0cefc334f9f9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, vendor=Red Hat, Inc., container_name=kepler, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Nov 26 01:38:21 compute-0 podman[359301]: 2025-11-26 01:38:21.602289882 +0000 UTC m=+0.156871260 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:38:21 compute-0 python3.9[359365]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:38:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:22 compute-0 python3.9[359517]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
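The systemd_service task above (enabled=True, state=started) is the moral equivalent of `systemctl enable --now podman.socket`; that socket is what serves the libpod REST requests logged a few seconds later. A rough equivalent from Python, assuming root privileges:

    import subprocess

    # enable --now == ansible's enabled=True plus state=started in one step
    subprocess.run(['systemctl', 'enable', '--now', 'podman.socket'], check=True)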
Nov 26 01:38:24 compute-0 python3.9[359671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:24 compute-0 python3.9[359749]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:38:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:38:24.948 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:38:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:38:24.949 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:38:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:38:24.949 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:38:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:26 compute-0 python3.9[359825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:26 compute-0 python3.9[359903]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:38:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:29 compute-0 python3.9[360055]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 26 01:38:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:29 compute-0 podman[158021]: time="2025-11-26T01:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:38:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:38:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8108 "" "Go-http-client/1.1"
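The two `GET /v4.9.3/libpod/...` lines are the podman system service answering a client (the podman exporter, per its CONTAINER_HOST=unix:///run/podman/podman.sock environment) over the socket enabled above. The same endpoint can be queried with only the standard library; a sketch, assuming that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])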
Nov 26 01:38:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:30 compute-0 python3.9[360207]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 01:38:31 compute-0 openstack_network_exporter[160178]: ERROR   01:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:38:31 compute-0 openstack_network_exporter[160178]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:38:31 compute-0 openstack_network_exporter[160178]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:38:31 compute-0 openstack_network_exporter[160178]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:38:31 compute-0 openstack_network_exporter[160178]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
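The openstack_network_exporter errors above are its ovs-appctl-style probes failing: ovn-northd runs on the control plane rather than on a compute node, and the ovsdb-server/datapath calls need control sockets in the OVS run directory. A quick way to list which control sockets actually exist on this host (paths assume the default OVS layout plus the /var/lib/openvswitch/ovn bind mount used by ovn_controller above):

    import glob

    # Sockets that ovs-appctl-style clients look for; an empty list here matches
    # the "no control socket files found" errors from the exporter.
    print(glob.glob('/var/run/openvswitch/*.ctl'))
    print(glob.glob('/var/lib/openvswitch/ovn/*.ctl'))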
Nov 26 01:38:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:32 compute-0 python3[360359]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 01:38:32 compute-0 python3[360359]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
     {
          "Id": "62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7",
          "Digest": "sha256:583d65a78a39c16ddc92f319340c5bae76869328a8d5a1b9abe707fc5728d5c1",
          "RepoTags": [
               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
          ],
          "RepoDigests": [
               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:583d65a78a39c16ddc92f319340c5bae76869328a8d5a1b9abe707fc5728d5c1"
          ],
          "Parent": "",
          "Comment": "",
          "Created": "2025-11-24T05:10:04.205057461Z",
          "Config": {
               "User": "root",
               "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                    "LANG=en_US.UTF-8",
                    "TZ=UTC",
                    "container=oci"
               ],
               "Entrypoint": [
                    "dumb-init",
                    "--single-child",
                    "--"
               ],
               "Cmd": [
                    "kolla_start"
               ],
               "Labels": {
                    "io.buildah.version": "1.41.4",
                    "maintainer": "OpenStack Kubernetes Operator team",
                    "org.label-schema.build-date": "20251118",
                    "org.label-schema.license": "GPLv2",
                    "org.label-schema.name": "CentOS Stream 10 Base Image",
                    "org.label-schema.schema-version": "1.0",
                    "org.label-schema.vendor": "CentOS",
                    "tcib_build_tag": "3c7bc1fa2adfe9145fe93e6d3cedb844",
                    "tcib_managed": "true"
               },
               "StopSignal": "SIGTERM"
          },
          "Version": "",
          "Author": "",
          "Architecture": "amd64",
          "Os": "linux",
          "Size": 601979596,
          "VirtualSize": 601979596,
          "GraphDriver": {
               "Name": "overlay",
               "Data": {
                    "LowerDir": "/var/lib/containers/storage/overlay/d9b3b733aa1c6500761d5288c1997b628803d05b26feef7ee8adfdd9643dcfba/diff:/var/lib/containers/storage/overlay/260bd78906ec2b20fe522fe5da3f74a5926b8f8e3c1f2ca1bbde23aba10cf672/diff:/var/lib/containers/storage/overlay/a83cbefd7bd667085f81ba0d97a2c30f62848b551cc628f9b8a17a8cd35173aa/diff:/var/lib/containers/storage/overlay/0d8549a173a3b232b2f6afed155b1026d32a65543a759bebeddf8a29d097339a/diff",
                    "UpperDir": "/var/lib/containers/storage/overlay/bbc7a538ef64c0c282fa2abceebcd68bf7ffaf36bd9c77504d11371ebf01264d/diff",
                    "WorkDir": "/var/lib/containers/storage/overlay/bbc7a538ef64c0c282fa2abceebcd68bf7ffaf36bd9c77504d11371ebf01264d/work"
               }
          },
          "RootFS": {
               "Type": "layers",
               "Layers": [
                    "sha256:0d8549a173a3b232b2f6afed155b1026d32a65543a759bebeddf8a29d097339a",
                    "sha256:0439f4dbf6bf989e58bc7280d1d99d7df1444eb4eb363c9188a5c8c4d15245c3",
                    "sha256:4220c4d566cd4bcf775dfd9a47df1a84b0bb99927c9efc7f6290f25d262a946c",
                    "sha256:1a4b3694be58e157e4795f41f5c812c43cbd9172e3e87cdae3b409670cf1a38a",
                    "sha256:2c0a936aec69de55888f22b1d5a2abec072a1727cd078f13b3a312e2cceb3710"
               ]
          },
          "Labels": {
               "io.buildah.version": "1.41.4",
               "maintainer": "OpenStack Kubernetes Operator team",
               "org.label-schema.build-date": "20251118",
               "org.label-schema.license": "GPLv2",
               "org.label-schema.name": "CentOS Stream 10 Base Image",
               "org.label-schema.schema-version": "1.0",
               "org.label-schema.vendor": "CentOS",
               "tcib_build_tag": "3c7bc1fa2adfe9145fe93e6d3cedb844",
               "tcib_managed": "true"
          },
          "Annotations": {},
          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
          "User": "root",
          "History": [
               {
                    "created": "2025-11-18T03:21:50.132885153Z",
                    "created_by": "/bin/sh -c #(nop) ADD file:966583dfcd62b970491cd0f3247bc5ec24f5004b9857324649452ae8f57cd5e1 in / ",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-18T03:21:50.132961772Z",
                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-18T03:21:52.734452447Z",
                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
               },
               {
                    "created": "2025-11-24T05:03:55.905139534Z",
                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                    "comment": "FROM quay.io/centos/centos:stream10",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:55.905172155Z",
                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:55.905194885Z",
                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:55.905210906Z",
                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:55.905240227Z",
                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:55.905254447Z",
                    "created_by": "/bin/sh -c #(nop) USER root",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:56.238585035Z",
                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:03:56.740252967Z",
                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-24T05:04:07.133736142Z",
                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && cr
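The PODMAN-CONTAINER-DEBUG dump above (truncated in the captured journal) is the image metadata edpm_container_manage gathers before (re)creating the container. The same record can be pulled directly; a sketch using the podman CLI and the image named in the log:

    import json
    import subprocess

    image = ('quay.rdoproject.org/podified-master-centos10/'
             'openstack-ceilometer-compute:current-tested')
    # `podman image inspect` prints a JSON array with one entry per image.
    out = subprocess.run(['podman', 'image', 'inspect', image],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(out)[0]
    print(info['Digest'], info['Architecture'], info['Size'])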
Nov 26 01:38:33 compute-0 python3.9[360569]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:38:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:35 compute-0 python3.9[360723]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:36 compute-0 python3.9[360874]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764121115.3062825-484-279637368685128/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:38:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:38 compute-0 python3.9[360950]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 01:38:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:38:39 compute-0 python3.9[361104]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 01:38:39 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.813 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.916 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.916 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.916 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.916 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 26 01:38:39 compute-0 ceilometer_agent_compute[154508]: 2025-11-26 01:38:39.933 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
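The ceilometer_agent_compute lines above show cotyledon's standard graceful stop: the master process catches SIGTERM from systemd, re-signals its AgentManager/AgentHeartBeatManager children, waits for them, then logs "Shutdown finish". Reduced to a stdlib sketch of the same pattern (illustrative only, not cotyledon's actual code):

    import signal
    import sys
    import time

    def _on_sigterm(signum, frame):
        # Mirror the "Caught SIGTERM signal, graceful exiting" behaviour:
        # finish up and exit 0 so systemd records a clean stop.
        print('caught SIGTERM, shutting down')
        sys.exit(0)

    signal.signal(signal.SIGTERM, _on_sigterm)
    while True:
        time.sleep(1)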
Nov 26 01:38:39 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 01:38:39 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 01:38:40 compute-0 systemd[1]: libpod-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 26 01:38:40 compute-0 systemd[1]: libpod-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Consumed 4.277s CPU time.
Nov 26 01:38:40 compute-0 podman[361108]: 2025-11-26 01:38:40.182678816 +0000 UTC m=+0.472416647 container died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:38:40 compute-0 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.timer: Deactivated successfully.
Nov 26 01:38:40 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.
Nov 26 01:38:40 compute-0 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: Failed to open /run/systemd/transient/bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-1a6bb104d2d31e81.service: No such file or directory
Nov 26 01:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-userdata-shm.mount: Deactivated successfully.
Nov 26 01:38:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b-merged.mount: Deactivated successfully.
Nov 26 01:38:40 compute-0 podman[361108]: 2025-11-26 01:38:40.286652841 +0000 UTC m=+0.576390612 container cleanup bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:38:40 compute-0 podman[361108]: ceilometer_agent_compute
Nov 26 01:38:40 compute-0 podman[361136]: ceilometer_agent_compute
Nov 26 01:38:40 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 26 01:38:40 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 26 01:38:40 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 26 01:38:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 01:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 01:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 26 01:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c6231f10c9f6da920409201cda8a091527e023af12d49171280888a978522b/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 26 01:38:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.
Nov 26 01:38:40 compute-0 podman[361148]: 2025-11-26 01:38:40.673892619 +0000 UTC m=+0.231797070 container init bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + sudo -E kolla_set_configs
Nov 26 01:38:40 compute-0 podman[361148]: 2025-11-26 01:38:40.724067596 +0000 UTC m=+0.281971987 container start bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:38:40 compute-0 podman[361148]: ceilometer_agent_compute
Nov 26 01:38:40 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: sudo: unable to send audit message: Operation not permitted
Nov 26 01:38:40 compute-0 podman[361166]: 2025-11-26 01:38:40.772554696 +0000 UTC m=+0.140264944 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 26 01:38:40 compute-0 podman[361183]: 2025-11-26 01:38:40.832596269 +0000 UTC m=+0.098673728 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Validating config file
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Copying service configuration files
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: INFO:__main__:Writing out command to execute
Nov 26 01:38:40 compute-0 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-3819216423d0d933.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 01:38:40 compute-0 systemd[1]: bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0-3819216423d0d933.service: Failed with result 'exit-code'.
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: ++ cat /run_command
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + ARGS=
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + sudo kolla_copy_cacerts
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: sudo: unable to send audit message: Operation not permitted
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + [[ ! -n '' ]]
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + . kolla_extend_start
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + umask 0022
Nov 26 01:38:40 compute-0 ceilometer_agent_compute[361163]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:38:41
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:38:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:38:41 compute-0 python3.9[361362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:38:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.444 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.445 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 podman[361412]: 2025-11-26 01:38:42.446224713 +0000 UTC m=+0.154321568 container health_status cc05c7b1c6d160e9190cd9b22fb0377ce58506f8cc0337a5346cf3e3ab3044d7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.446 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.447 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.448 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.449 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.450 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.451 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.452 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.453 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.454 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.455 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.456 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.457 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.458 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.459 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.460 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.460 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.483 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.483 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.484 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.485 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.486 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.487 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.488 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.489 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.490 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.491 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.492 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.493 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.494 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.494 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
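The block between the two asterisk banners above is oslo.config's log_opt_values dump for the pid-12 child: one "option = value" pair per line, with grouped options prefixed as "section.option" and "****" substituted wherever the option was registered as secret (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw_admin_credentials keys). A minimal sketch for recovering these pairs from a captured journal file; the parse_conf_dump helper and its regex are ours, written against the exact line layout shown here:

    import re

    # Matches lines like:
    #   "... DEBUG cotyledon.oslo_config_glue [-] polling.batch_size  = 50 log_opt_values /usr/lib/..."
    # group 1 = option name (optionally "section.option"), group 2 = logged value.
    _OPT_RE = re.compile(
        r"DEBUG cotyledon\.oslo_config_glue \[-\] "
        r"([A-Za-z0-9_.]+)\s+= (.*?) log_opt_values "
    )

    def parse_conf_dump(journal_text: str) -> dict:
        """Collect option -> value strings; '****' marks a value oslo.config redacted."""
        opts = {}
        for line in journal_text.splitlines():
            m = _OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2)
        return opts

Values are kept as raw strings (so parse_conf_dump(text)['polling.prometheus_tls_enable'] is the string 'True'); anything typed has to be converted by the caller.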
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.494 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.496 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.498 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.499 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
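Pid 12 is the AgentHeartBeatManager child: it binds /var/lib/ceilometer/ceilometer-compute.socket and then starts its queue-reader and status-reporting threads, per the three lines above. A small sketch, assuming local access to the same filesystem namespace on compute-0, to confirm the socket path was actually created as a Unix socket rather than left behind as a stale regular file:

    import stat
    from pathlib import Path

    SOCKET = Path("/var/lib/ceilometer/ceilometer-compute.socket")

    st = SOCKET.stat()  # FileNotFoundError here would mean the heartbeat never bound it
    if stat.S_ISSOCK(st.st_mode):
        print(f"{SOCKET} is a Unix socket (mode {oct(st.st_mode)})")
    else:
        print(f"{SOCKET} exists but is not a socket -- stale leftover?")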
Nov 26 01:38:42 compute-0 podman[361413]: 2025-11-26 01:38:42.510488435 +0000 UTC m=+0.213197409 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
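The podman health_status record above is a single journal line whose config_data field is a Python-literal dict (note 'privileged': True and the healthcheck test /openstack/healthcheck). Because it is a literal, ast.literal_eval can recover it; the regex below is tailored to this exact layout, where the dict is immediately followed by ", config_id=":

    import ast
    import re

    def extract_config_data(journal_line: str) -> dict:
        """Pull the config_data={...} dict out of a podman health_status line."""
        m = re.search(r"config_data=(\{.*?\})(?=, config_id=)", journal_line)
        if not m:
            raise ValueError("no config_data=... field in this line")
        # Only str/bool/list/dict literals appear, so literal_eval is safe here.
        return ast.literal_eval(m.group(1))

For the ovn_controller line above, extract_config_data(line)["healthcheck"]["test"] returns '/openstack/healthcheck', and ["volumes"] lists the TLS and healthcheck mounts.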
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.537 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
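Worker 15 opens the hypervisor connection next; libvirt_uri is empty in the config dumps above, so with libvirt_type = kvm the connection works out to qemu:///system, which is exactly what this line logs. A sketch of the same connection via the libvirt Python binding (read-only, so it needs no write privileges on the hypervisor):

    import libvirt  # python3-libvirt

    # Same URI the agent logged above.
    conn = libvirt.openReadOnly("qemu:///system")
    print("host:", conn.getHostname(), "running domains:", conn.numOfDomains())
    conn.close()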
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.550 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.551 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.551 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
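Worker 15's discovery pass found nothing in /etc/ceilometer/pollsters.d, so only the stock compute pollsters will run; that matches pollsters_definitions_dirs in both config dumps. A quick local check of what the agent would have picked up, assuming the usual convention that dynamic pollster definitions are YAML files dropped into that directory:

    from pathlib import Path

    POLLSTERS_D = Path("/etc/ceilometer/pollsters.d")

    # The agent logged "No dynamic pollsters found" -- this should print nothing.
    for f in sorted(POLLSTERS_D.glob("*.yaml")):
        print(f, f.stat().st_size, "bytes")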
Nov 26 01:38:42 compute-0 python3.9[361476]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.774 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.775 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.776 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.776 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.776 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.776 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.776 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.777 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.778 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.778 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.779 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.779 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.779 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.780 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.780 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.780 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.780 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.781 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.781 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.781 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.781 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.782 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.782 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.782 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.782 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.782 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.783 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.783 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.783 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.783 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.784 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.784 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.784 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.784 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.784 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.785 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.785 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.785 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.785 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.786 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.786 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.786 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.787 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.787 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.787 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.787 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.788 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.788 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.788 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.789 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.789 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.789 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.790 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.790 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.790 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.791 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.791 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.791 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.791 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.792 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.792 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.792 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.792 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.792 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.793 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.793 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.793 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.793 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.793 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.794 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.794 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.794 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.794 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.795 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.795 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.795 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.795 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.796 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.796 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.796 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.796 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.797 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.797 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.797 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.797 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.797 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.798 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.798 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.798 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.798 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.798 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.799 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.799 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.799 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.799 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.799 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.800 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.801 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.802 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.803 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.804 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.805 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.806 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
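For reference, the option dump above corresponds roughly to a ceilometer.conf along the following lines. This is a minimal sketch reconstructed only from the values logged above; masked (****) secrets are omitted and the exact file layout on disk is an assumption:

    [DEFAULT]
    sample_source = openstack
    rootwrap_config = /etc/ceilometer/rootwrap.conf

    [compute]
    instance_discovery_method = libvirt_metadata
    resource_cache_expiry = 3600

    [polling]
    cfg_file = polling.yaml
    batch_size = 50
    enable_prometheus_exporter = True
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = True
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key

    [service_credentials]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = ceilometer
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    interface = internalURL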
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.807 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.813 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
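The config dict logged above maps one-to-one onto the polling definition file the agent reads (polling.yaml, per polling.cfg_file in the option dump). A sketch of the equivalent file contents:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*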
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.854 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.855 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.856 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.857 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.858 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.858 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab81fdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:38:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:38:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
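The discovery step logged at 01:38:42.858 above is a plain read-only libvirt connection. A minimal sketch of the same call through the libvirt Python binding (assumes libvirt-python is installed and the caller can read qemu:///system):

    import libvirt  # libvirt-python binding

    # Same URI the agent logs when connecting for local_instances discovery.
    conn = libvirt.openReadOnly("qemu:///system")
    # Discovery boils down to enumerating the domains known to this host;
    # with no instances running, pollsters are skipped as logged above.
    print([dom.name() for dom in conn.listAllDomains()])
    conn.close()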
Nov 26 01:42:28 compute-0 rsyslogd[188548]: imjournal: 3513 messages lost due to rate-limiting (20000 allowed within 600 seconds)
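The "20000 allowed within 600 seconds" figures above are the imjournal rate-limit settings; they can be raised in /etc/rsyslog.conf if message loss is unacceptable. A sketch using the stock imjournal parameters (the burst value below is only an example, not the deployed setting):

    module(load="imjournal"
           StateFile="imjournal.state"
           Ratelimit.Interval="600"
           Ratelimit.Burst="50000")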
Nov 26 01:42:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:29 compute-0 python3.9[390328]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
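Expressed as a playbook task, the module invocation above would look roughly like the following (module name and parameters are exactly as logged; the task name and register variable are assumptions):

    - name: Read the gid of the ceilometer user inside the container
      containers.podman.podman_container_exec:
        name: ceilometer_agent_compute
        command: id -g
      register: ceilometer_gid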
Nov 26 01:42:29 compute-0 systemd[1]: Started libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope.
Nov 26 01:42:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:29 compute-0 podman[390329]: 2025-11-26 01:42:29.516772976 +0000 UTC m=+0.126761209 container exec bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 01:42:29 compute-0 podman[390329]: 2025-11-26 01:42:29.550220571 +0000 UTC m=+0.160208774 container exec_died bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 26 01:42:29 compute-0 systemd[1]: libpod-conmon-bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0.scope: Deactivated successfully.
Nov 26 01:42:29 compute-0 podman[158021]: time="2025-11-26T01:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:42:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42582 "" "Go-http-client/1.1"
Nov 26 01:42:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8084 "" "Go-http-client/1.1"
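The two GET requests above arrive over the libpod REST API on a Unix socket. A minimal sketch reproducing the containers/json query (the socket path /run/podman/podman.sock is an assumption; the API version segment is taken from the access log):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)  # expect 200, as in the access log above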
Nov 26 01:42:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:30 compute-0 podman[390436]: 2025-11-26 01:42:30.581552658 +0000 UTC m=+0.136759872 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
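The --web.config.file flag above points node_exporter at a Prometheus web-config file. Given the cert volume mounted at /etc/node_exporter/tls in the container config, the file plausibly contains something like the following (key names follow the Prometheus exporter-toolkit web-config format; the file contents and cert file names are assumptions):

    tls_server_config:
      cert_file: /etc/node_exporter/tls/tls.crt
      key_file: /etc/node_exporter/tls/tls.key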
Nov 26 01:42:31 compute-0 python3.9[390535]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:42:31 compute-0 openstack_network_exporter[367323]: ERROR   01:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:42:31 compute-0 openstack_network_exporter[367323]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:42:31 compute-0 openstack_network_exporter[367323]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:42:31 compute-0 openstack_network_exporter[367323]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Nov 26 01:42:31 compute-0 openstack_network_exporter[367323]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:42:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:32 compute-0 python3.9[390687]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 26 01:42:33 compute-0 python3.9[390851]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:33 compute-0 systemd[1]: Started libpod-conmon-5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce.scope.
Nov 26 01:42:33 compute-0 podman[390852]: 2025-11-26 01:42:33.9713483 +0000 UTC m=+0.149807851 container exec 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:42:34 compute-0 podman[390852]: 2025-11-26 01:42:34.007411328 +0000 UTC m=+0.185870849 container exec_died 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:42:34 compute-0 systemd[1]: libpod-conmon-5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce.scope: Deactivated successfully.
Nov 26 01:42:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:35 compute-0 python3.9[391033]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:35 compute-0 systemd[1]: Started libpod-conmon-5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce.scope.
Nov 26 01:42:35 compute-0 podman[391034]: 2025-11-26 01:42:35.323097602 +0000 UTC m=+0.139649564 container exec 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:42:35 compute-0 podman[391034]: 2025-11-26 01:42:35.357682459 +0000 UTC m=+0.174234371 container exec_died 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:42:35 compute-0 systemd[1]: libpod-conmon-5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce.scope: Deactivated successfully.
Nov 26 01:42:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:36 compute-0 python3.9[391215]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:42:37 compute-0 python3.9[391469]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:42:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev cb7b9c4e-823c-414f-8315-9bd5dc9acdcd does not exist
Nov 26 01:42:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d5cf7401-6365-4575-81b5-37819295efba does not exist
Nov 26 01:42:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7ef9e56a-4838-4b9d-9acb-62fe6b1bff5c does not exist
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:42:37 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:42:37 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:42:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:42:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:42:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:38 compute-0 python3.9[391767]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:42:39 compute-0 systemd[1]: Started libpod-conmon-fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e.scope.
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.016565837 +0000 UTC m=+0.075535593 container create 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:42:39 compute-0 podman[391798]: 2025-11-26 01:42:39.025574022 +0000 UTC m=+0.125432093 container exec fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:42:39 compute-0 podman[391798]: 2025-11-26 01:42:39.069418329 +0000 UTC m=+0.169276350 container exec_died fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:38.977765862 +0000 UTC m=+0.036735638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:39 compute-0 systemd[1]: Started libpod-conmon-6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f.scope.
Nov 26 01:42:39 compute-0 systemd[1]: libpod-conmon-fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e.scope: Deactivated successfully.
Nov 26 01:42:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.168401654 +0000 UTC m=+0.227371470 container init 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.180148716 +0000 UTC m=+0.239118462 container start 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.186099674 +0000 UTC m=+0.245069450 container attach 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:42:39 compute-0 angry_northcutt[391847]: 167 167
Nov 26 01:42:39 compute-0 systemd[1]: libpod-6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f.scope: Deactivated successfully.
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.193162733 +0000 UTC m=+0.252132469 container died 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:42:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a2220f73cef36d78c7d89f5a5af24debc1bb3f8511aa813fce656407b2c33f-merged.mount: Deactivated successfully.
Nov 26 01:42:39 compute-0 podman[391807]: 2025-11-26 01:42:39.263434767 +0000 UTC m=+0.322404533 container remove 6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 01:42:39 compute-0 systemd[1]: libpod-conmon-6feb895e2ca0a978c27405e951b011a64cf10613d97f276a5794922f7bd12f4f.scope: Deactivated successfully.
Nov 26 01:42:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:39 compute-0 podman[391916]: 2025-11-26 01:42:39.547115636 +0000 UTC m=+0.086358639 container create a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:42:39 compute-0 podman[391916]: 2025-11-26 01:42:39.514117674 +0000 UTC m=+0.053360727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:39 compute-0 systemd[1]: Started libpod-conmon-a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd.scope.
Nov 26 01:42:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:39 compute-0 podman[391916]: 2025-11-26 01:42:39.690592787 +0000 UTC m=+0.229835760 container init a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:42:39 compute-0 podman[391916]: 2025-11-26 01:42:39.707649608 +0000 UTC m=+0.246892601 container start a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:42:39 compute-0 podman[391916]: 2025-11-26 01:42:39.713969627 +0000 UTC m=+0.253212640 container attach a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:42:40 compute-0 python3.9[392041]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:40 compute-0 systemd[1]: Started libpod-conmon-fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e.scope.
Nov 26 01:42:40 compute-0 podman[392042]: 2025-11-26 01:42:40.481884287 +0000 UTC m=+0.143426160 container exec fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:42:40 compute-0 podman[392042]: 2025-11-26 01:42:40.516632108 +0000 UTC m=+0.178173981 container exec_died fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:42:40 compute-0 systemd[1]: libpod-conmon-fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e.scope: Deactivated successfully.
Nov 26 01:42:40 compute-0 lucid_chatterjee[391961]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:42:40 compute-0 lucid_chatterjee[391961]: --> relative data size: 1.0
Nov 26 01:42:40 compute-0 lucid_chatterjee[391961]: --> All data devices are unavailable
Nov 26 01:42:40 compute-0 systemd[1]: libpod-a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd.scope: Deactivated successfully.
Nov 26 01:42:40 compute-0 systemd[1]: libpod-a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd.scope: Consumed 1.189s CPU time.
Nov 26 01:42:40 compute-0 podman[391916]: 2025-11-26 01:42:40.965637815 +0000 UTC m=+1.504880808 container died a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eafef9eb5772ecbc3caddeeec0fd3c135bdfe892e87d5ad878d17bdda69e91f-merged.mount: Deactivated successfully.
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:42:41
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', 'images', '.mgr']
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:42:41 compute-0 podman[391916]: 2025-11-26 01:42:41.08341148 +0000 UTC m=+1.622654483 container remove a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatterjee, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:42:41 compute-0 systemd[1]: libpod-conmon-a556f6f664af173fc3782e312b179b9bd5b9694df3f914ff06467adf93e064bd.scope: Deactivated successfully.
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:42:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:42:41 compute-0 python3.9[392332]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.283044757 +0000 UTC m=+0.093538701 container create ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.24807665 +0000 UTC m=+0.058570654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:42 compute-0 systemd[1]: Started libpod-conmon-ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b.scope.
Nov 26 01:42:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.453995644 +0000 UTC m=+0.264489578 container init ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.473924236 +0000 UTC m=+0.284418140 container start ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.479080232 +0000 UTC m=+0.289574136 container attach ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:42:42 compute-0 interesting_turing[392465]: 167 167
Nov 26 01:42:42 compute-0 systemd[1]: libpod-ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b.scope: Deactivated successfully.
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.486335487 +0000 UTC m=+0.296829431 container died ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:42:42 compute-0 podman[392455]: 2025-11-26 01:42:42.506321401 +0000 UTC m=+0.142267228 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0d1e53bb3fe1d591594ebb6617612f4b1a61262f60b7b50ba5b79a8216c6de-merged.mount: Deactivated successfully.
Nov 26 01:42:42 compute-0 podman[392444]: 2025-11-26 01:42:42.569641469 +0000 UTC m=+0.380135383 container remove ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:42:42 compute-0 systemd[1]: libpod-conmon-ee2ce84a5a3e58392140f9b1023612ba6616af8b7b0fa0ff1f91d8d4545f965b.scope: Deactivated successfully.
Nov 26 01:42:42 compute-0 podman[392530]: 2025-11-26 01:42:42.846414573 +0000 UTC m=+0.094311544 container create 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.855 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.857 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.857 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.858 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.864 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.866 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.866 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.867 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.867 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.867 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.867 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.867 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.868 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.868 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:42:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:42:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
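
The block of DEBUG lines above is one ceilometer polling cycle: for every enabled pollster the agent first runs the pollster's discovery method (local_instances here, i.e. the instances on this hypervisor), skips the pollster when discovery returns nothing (manager.py:321), and then marks the pollster finished (manager.py:272). A minimal sketch of that discover-then-skip loop, using illustrative names rather than ceilometer's actual classes:

    # Minimal sketch of the discover-then-skip cycle logged above;
    # Pollster and run_polling_task are illustrative names, not
    # ceilometer's real API.

    class Pollster:
        def __init__(self, name, discovery="local_instances"):
            self.name = name
            self.discovery = discovery

        def get_samples(self, resources):
            # Real compute pollsters query libvirt per instance here.
            return iter(())

    def run_polling_task(pollsters, discover):
        for p in pollsters:
            resources = discover(p.discovery)
            if not resources:
                print(f"Skip pollster {p.name}, no resources found this cycle")
            else:
                for sample in p.get_samples(resources):
                    print("publish", sample)
            print(f"Finished processing pollster [{p.name}].")

    # compute-0 hosts no instances during this cycle, so discovery is
    # empty and every pollster is skipped:
    run_polling_task([Pollster("cpu"), Pollster("memory.usage")],
                     discover=lambda method: [])
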
Nov 26 01:42:42 compute-0 podman[392530]: 2025-11-26 01:42:42.810169969 +0000 UTC m=+0.058067020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:42 compute-0 systemd[1]: Started libpod-conmon-961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929.scope.
Nov 26 01:42:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92ee13e6e472cdc5a26b7a53a7a2c607d267e7b7dccc6c23f6bc5d74b7158bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92ee13e6e472cdc5a26b7a53a7a2c607d267e7b7dccc6c23f6bc5d74b7158bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92ee13e6e472cdc5a26b7a53a7a2c607d267e7b7dccc6c23f6bc5d74b7158bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92ee13e6e472cdc5a26b7a53a7a2c607d267e7b7dccc6c23f6bc5d74b7158bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:43 compute-0 podman[392530]: 2025-11-26 01:42:43.035642675 +0000 UTC m=+0.283539716 container init 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:42:43 compute-0 podman[392530]: 2025-11-26 01:42:43.055456865 +0000 UTC m=+0.303353866 container start 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:42:43 compute-0 podman[392530]: 2025-11-26 01:42:43.062152964 +0000 UTC m=+0.310050025 container attach 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:42:43 compute-0 podman[392604]: 2025-11-26 01:42:43.24303367 +0000 UTC m=+0.111499819 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 26 01:42:43 compute-0 podman[392600]: 2025-11-26 01:42:43.243549465 +0000 UTC m=+0.119620118 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Nov 26 01:42:43 compute-0 python3.9[392659]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]: {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    "0": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "devices": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "/dev/loop3"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            ],
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_name": "ceph_lv0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_size": "21470642176",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "name": "ceph_lv0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "tags": {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_name": "ceph",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.crush_device_class": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.encrypted": "0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_id": "0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.vdo": "0"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            },
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "vg_name": "ceph_vg0"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        }
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    ],
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    "1": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "devices": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "/dev/loop4"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            ],
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_name": "ceph_lv1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_size": "21470642176",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "name": "ceph_lv1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "tags": {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_name": "ceph",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.crush_device_class": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.encrypted": "0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_id": "1",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.vdo": "0"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            },
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "vg_name": "ceph_vg1"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        }
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    ],
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    "2": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "devices": [
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "/dev/loop5"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            ],
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_name": "ceph_lv2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_size": "21470642176",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "name": "ceph_lv2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "tags": {
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.cluster_name": "ceph",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.crush_device_class": "",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.encrypted": "0",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osd_id": "2",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:                "ceph.vdo": "0"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            },
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "type": "block",
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:            "vg_name": "ceph_vg2"
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:        }
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]:    ]
Nov 26 01:42:43 compute-0 relaxed_hawking[392570]: }
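
The JSON printed by the relaxed_hawking container is a per-OSD LVM inventory for this host: the top-level keys are OSD ids, and each value lists the backing logical volume with its ceph.* LV tags. Here osd.0 through osd.2 sit on /dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2, each 21470642176 bytes and carved from a loop device (/dev/loop3-5), all unencrypted and in the same cluster fsid. A small sketch that reduces output of this shape to an OSD-to-device summary, assuming the container's stdout has been captured to a file (the filename is hypothetical):

    import json

    # Hypothetical capture of the JSON block shown above.
    with open("ceph_volume_lvm_list.json") as f:
        lvs_by_osd = json.load(f)

    for osd_id, lvs in sorted(lvs_by_osd.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"({int(lv['lv_size']) / 2**30:.1f} GiB, "
                  f"osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")

For the data above this prints, e.g., "osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (20.0 GiB, ...)".
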
Nov 26 01:42:43 compute-0 systemd[1]: libpod-961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929.scope: Deactivated successfully.
Nov 26 01:42:43 compute-0 podman[392530]: 2025-11-26 01:42:43.898581488 +0000 UTC m=+1.146478459 container died 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f92ee13e6e472cdc5a26b7a53a7a2c607d267e7b7dccc6c23f6bc5d74b7158bd-merged.mount: Deactivated successfully.
Nov 26 01:42:43 compute-0 podman[392530]: 2025-11-26 01:42:43.977490456 +0000 UTC m=+1.225387427 container remove 961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:42:43 compute-0 systemd[1]: libpod-conmon-961e4dae42c7d3bbd8fbf9261b14b868e8869e05fdf2b7642eb10590438cf929.scope: Deactivated successfully.
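
Taken together, the relaxed_hawking lines trace the full lifecycle of a run-once container: image pull, conmon scope start, libcrun start, init, start, attach, died, remove, all within about 1.2 seconds (m=+0.058 to m=+1.225). Podman records each step as an event, so the same trace can be replayed after the fact; a sketch, assuming the host's podman uses an events backend that retains history (the journald backend does):

    import subprocess

    # Replay the lifecycle events logged above for the ad-hoc container.
    subprocess.run(
        ["podman", "events",
         "--since", "2025-11-26T01:42:42", "--until", "2025-11-26T01:42:44",
         "--filter", "container=relaxed_hawking"],
        check=False)
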
Nov 26 01:42:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
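
The ceph-mgr pgmap line ("321 pgs: 321 active+clean; ... 60 GiB / 60 GiB avail") is consistent with the LVM inventory printed above: three OSDs, each backed by a 21470642176-byte LV. A quick cross-check of that raw-capacity figure:

    # Three OSD LVs of 21470642176 bytes each (the lv_size fields above)
    # account for the pgmap's reported raw capacity.
    lv_size = 21470642176
    print(f"{3 * lv_size / 2**30:.1f} GiB")  # ~60.0 GiB
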
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.211929976 +0000 UTC m=+0.090025351 container create 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.178812921 +0000 UTC m=+0.056908326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:45 compute-0 systemd[1]: Started libpod-conmon-876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3.scope.
Nov 26 01:42:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.373626872 +0000 UTC m=+0.251722247 container init 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.399563884 +0000 UTC m=+0.277659249 container start 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:42:45 compute-0 eager_hofstadter[392960]: 167 167
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.411396708 +0000 UTC m=+0.289492133 container attach 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 26 01:42:45 compute-0 systemd[1]: libpod-876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3.scope: Deactivated successfully.
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.422265855 +0000 UTC m=+0.300361220 container died 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a73ee1e7fa6e28b59e578ba36360f33fa730bde8982f0906c993a0f666dcdd3b-merged.mount: Deactivated successfully.
Nov 26 01:42:45 compute-0 podman[392926]: 2025-11-26 01:42:45.493516596 +0000 UTC m=+0.371611941 container remove 876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:42:45 compute-0 systemd[1]: libpod-conmon-876a9c819c37686e696fbbbc1796b2b053704367c9a60cf8c1d4c562a33ed3d3.scope: Deactivated successfully.
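
eager_hofstadter is another run-once container against the same ceph image, and its only output, "167 167", matches the uid/gid of the ceph user in upstream ceph container images. This is consistent with deployment tooling probing the image for the account that data directories should be chowned to; a hypothetical reproduction of such a probe (the exact command the tooling runs is an assumption):

    import subprocess

    # Assumed probe: stat the ceph data directory inside the image to
    # learn the ceph uid/gid (167:167 in upstream ceph images).
    image = ("quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336"
             "506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: 167 167
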
Nov 26 01:42:45 compute-0 podman[393004]: 2025-11-26 01:42:45.653480943 +0000 UTC m=+0.160326988 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:42:45 compute-0 python3.9[393016]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
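
The two ansible invocations above (containers.podman.podman_container_info on openstack_network_exporter, then podman_container_exec running id -u inside it) are that collection's equivalents of podman inspect and podman exec; the container-side exec/exec_died events for the id -u call appear a few lines below (podman[393057]). The CLI equivalent of the exec step, as a sketch:

    import subprocess

    # CLI equivalent of the ansible podman_container_exec call above.
    uid = subprocess.run(
        ["podman", "exec", "openstack_network_exporter", "id", "-u"],
        capture_output=True, text=True, check=True).stdout.strip()
    print(uid)
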
Nov 26 01:42:45 compute-0 podman[393043]: 2025-11-26 01:42:45.721333588 +0000 UTC m=+0.072673993 container create 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:42:45 compute-0 podman[393043]: 2025-11-26 01:42:45.681810422 +0000 UTC m=+0.033150827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:42:45 compute-0 systemd[1]: Started libpod-conmon-2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1.scope.
Nov 26 01:42:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5e79c3d96e7df2845431b0ee2dcc22fd719ef89bd28b1f1b5e9db0aaa8ac58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5e79c3d96e7df2845431b0ee2dcc22fd719ef89bd28b1f1b5e9db0aaa8ac58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5e79c3d96e7df2845431b0ee2dcc22fd719ef89bd28b1f1b5e9db0aaa8ac58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5e79c3d96e7df2845431b0ee2dcc22fd719ef89bd28b1f1b5e9db0aaa8ac58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:42:45 compute-0 systemd[1]: Started libpod-conmon-27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670.scope.
Nov 26 01:42:45 compute-0 podman[393043]: 2025-11-26 01:42:45.854853048 +0000 UTC m=+0.206193443 container init 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 26 01:42:45 compute-0 podman[393057]: 2025-11-26 01:42:45.866570579 +0000 UTC m=+0.137177664 container exec 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Nov 26 01:42:45 compute-0 podman[393043]: 2025-11-26 01:42:45.871180929 +0000 UTC m=+0.222521314 container start 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:42:45 compute-0 podman[393043]: 2025-11-26 01:42:45.878403513 +0000 UTC m=+0.229743918 container attach 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 01:42:45 compute-0 podman[393057]: 2025-11-26 01:42:45.898535981 +0000 UTC m=+0.169143046 container exec_died 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git)
Nov 26 01:42:45 compute-0 systemd[1]: libpod-conmon-27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670.scope: Deactivated successfully.
Nov 26 01:42:45 compute-0 podman[393075]: 2025-11-26 01:42:45.961232951 +0000 UTC m=+0.123315592 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 01:42:45 compute-0 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-53281d58fccfb4a5.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 01:42:45 compute-0 systemd[1]: 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22-53281d58fccfb4a5.service: Failed with result 'exit-code'.
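The 576873a7…-53281d58fccfb4a5.service unit that just failed is the transient systemd service podman spawns for each healthcheck run of ceilometer_agent_ipmi; its status=1/FAILURE is what advances health_failing_streak=2 while the container is still health_status=starting. The same state can be read back from the host with podman inspect; the sketch below probes the key layout defensively because podman has used both "Health" and "Healthcheck" as the state key across versions (a hedged sketch, not the deployment's own tooling):

import json
import subprocess

# `podman inspect NAME` prints a JSON array with one object per container.
out = subprocess.run(
    ["podman", "inspect", "ceilometer_agent_ipmi"],
    check=True, capture_output=True, text=True,
).stdout
state = json.loads(out)[0].get("State", {})

# Podman has exposed the health block under either key name depending on
# version, so look for both rather than assuming one.
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), "failing streak:", health.get("FailingStreak"))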
Nov 26 01:42:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:47 compute-0 great_curie[393069]: {
Nov 26 01:42:47 compute-0 great_curie[393069]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:42:47 compute-0 great_curie[393069]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_id": 0,
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "type": "bluestore"
Nov 26 01:42:47 compute-0 great_curie[393069]:    },
Nov 26 01:42:47 compute-0 great_curie[393069]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:42:47 compute-0 great_curie[393069]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_id": 2,
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "type": "bluestore"
Nov 26 01:42:47 compute-0 great_curie[393069]:    },
Nov 26 01:42:47 compute-0 great_curie[393069]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:42:47 compute-0 great_curie[393069]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_id": 1,
Nov 26 01:42:47 compute-0 great_curie[393069]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:42:47 compute-0 great_curie[393069]:        "type": "bluestore"
Nov 26 01:42:47 compute-0 great_curie[393069]:    }
Nov 26 01:42:47 compute-0 great_curie[393069]: }
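The JSON block printed by great_curie is a ceph-volume-style inventory of this host's three BlueStore OSDs (osd.0, osd.1, osd.2, one per ceph_vgN/ceph_lvN logical volume, all under ceph_fsid 36901f64-240e-5c29-a2e2-29b56f2c329c). A minimal sketch of consuming that structure, assuming the output has been captured to a string, e.g. via podman logs (abridged to a single entry here):

import json

# Output captured from the ceph-volume listing above (abridged); in
# practice this would come from something like `podman logs great_curie`.
raw = """
{
   "835781ef-644a-4834-abb3-029e5bcba0ff": {
       "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
       "device": "/dev/mapper/ceph_vg0-ceph_lv0",
       "osd_id": 0,
       "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
       "type": "bluestore"
   }
}
"""

osds = json.loads(raw)
for osd_uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    # Each entry maps an OSD UUID to its backing device and store type.
    print(f"osd.{meta['osd_id']} -> {meta['device']} ({meta['type']})")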
Nov 26 01:42:47 compute-0 python3.9[393282]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:47 compute-0 systemd[1]: libpod-2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1.scope: Deactivated successfully.
Nov 26 01:42:47 compute-0 podman[393043]: 2025-11-26 01:42:47.098318904 +0000 UTC m=+1.449659319 container died 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:42:47 compute-0 systemd[1]: libpod-2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1.scope: Consumed 1.191s CPU time.
Nov 26 01:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b5e79c3d96e7df2845431b0ee2dcc22fd719ef89bd28b1f1b5e9db0aaa8ac58-merged.mount: Deactivated successfully.
Nov 26 01:42:47 compute-0 podman[393043]: 2025-11-26 01:42:47.198881643 +0000 UTC m=+1.550222028 container remove 2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:42:47 compute-0 systemd[1]: Started libpod-conmon-27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670.scope.
Nov 26 01:42:47 compute-0 systemd[1]: libpod-conmon-2df95190ef28902c185e8c53f6ded3b57f5fbfd6ca39ceb96aaf19aaf7b0c5c1.scope: Deactivated successfully.
Nov 26 01:42:47 compute-0 podman[393294]: 2025-11-26 01:42:47.257324113 +0000 UTC m=+0.161056618 container exec 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Nov 26 01:42:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:42:47 compute-0 podman[393294]: 2025-11-26 01:42:47.292784654 +0000 UTC m=+0.196517119 container exec_died 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Nov 26 01:42:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:42:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:42:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
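These two mon_command calls are the cephadm mgr module persisting the freshly gathered device inventory under per-host config-key entries. Assuming an admin keyring is available and that the stored value is JSON (which is how cephadm serializes it), the blob can be read back with the stock ceph config-key get subcommand; a hedged sketch:

import json
import subprocess

# Read back the inventory blob cephadm stored via the mon_command above.
key = "mgr/cephadm/host.compute-0.devices.0"
blob = subprocess.run(["ceph", "config-key", "get", key],
                      check=True, capture_output=True, text=True).stdout
print(json.loads(blob))  # assumption: the stored value is JSON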
Nov 26 01:42:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev df8828c3-4c41-4479-b09f-eb1e59fe47c2 does not exist
Nov 26 01:42:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d6391796-2bb9-4851-9786-49856a383e74 does not exist
Nov 26 01:42:47 compute-0 systemd[1]: libpod-conmon-27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670.scope: Deactivated successfully.
Nov 26 01:42:48 compute-0 podman[393509]: 2025-11-26 01:42:48.219312373 +0000 UTC m=+0.128033126 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, version=9.4, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 01:42:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:42:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:42:48 compute-0 python3.9[393556]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
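The id -g exec against openstack_network_exporter followed by this ansible.builtin.file task is the recurring edpm_ansible pattern in this log: resolve the UID/GID the container actually runs as, then chown the healthcheck mount on the host to match (owner=0/group=0 here because the exporter runs as root; the ceilometer and kepler containers below get the same treatment). A rough Python equivalent of the pattern, assuming plain podman exec access from the host:

import os
import subprocess

def owner_of(container: str) -> tuple[int, int]:
    """Ask the running container for its effective uid/gid, as the
    podman_container_exec tasks in this log do with `id -u` / `id -g`."""
    uid = subprocess.run(["podman", "exec", container, "id", "-u"],
                         check=True, capture_output=True, text=True).stdout
    gid = subprocess.run(["podman", "exec", container, "id", "-g"],
                         check=True, capture_output=True, text=True).stdout
    return int(uid), int(gid)

uid, gid = owner_of("openstack_network_exporter")
path = "/var/lib/openstack/healthchecks/openstack_network_exporter"
os.chmod(path, 0o700)     # mode=0700, as in the file task above
os.chown(path, uid, gid)  # owner/group resolved from the container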
Nov 26 01:42:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:49 compute-0 python3.9[393708]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:42:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
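The pg_autoscaler lines expose their arithmetic directly: pg target = (fraction of space used) × bias × PG budget, with the budget evidently 300 here (7.185749983720779e-06 × 1.0 × 300 = 0.0021557…, matching the '.mgr' line, and 5.087256625643029e-07 × 4.0 × 300 matching cephfs.cephfs.meta), after which the result is quantized to a power of two. A toy reproduction, assuming the budget comes from mon_target_pg_per_osd=100 across this cluster's 3 OSDs; the real module also applies per-pool minimums and change thresholds, which is why most pools stay at their current 32:

# Toy version of the pg_autoscaler arithmetic visible in the log.
# Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
BUDGET = 300

def pg_target(space_used_fraction: float, bias: float) -> float:
    return space_used_fraction * bias * BUDGET

def quantize(target: float) -> int:
    # Round up to the next power of two, with a floor of 1 PG.
    pgs = 1
    while pgs < target:
        pgs *= 2
    return pgs

# Reproduces the '.mgr' line: pg target 0.0021557... quantized to 1.
t = pg_target(7.185749983720779e-06, 1.0)
print(t, quantize(t))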
Nov 26 01:42:50 compute-0 python3.9[393873]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:51 compute-0 systemd[1]: Started libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope.
Nov 26 01:42:51 compute-0 podman[393874]: 2025-11-26 01:42:51.049926597 +0000 UTC m=+0.134494158 container exec 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:42:51 compute-0 podman[393874]: 2025-11-26 01:42:51.084516784 +0000 UTC m=+0.169084325 container exec_died 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 26 01:42:51 compute-0 systemd[1]: libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Deactivated successfully.
Nov 26 01:42:52 compute-0 python3.9[394057]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:52 compute-0 systemd[1]: Started libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope.
Nov 26 01:42:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:52 compute-0 podman[394058]: 2025-11-26 01:42:52.419710768 +0000 UTC m=+0.149223033 container exec 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:42:52 compute-0 podman[394058]: 2025-11-26 01:42:52.455947631 +0000 UTC m=+0.185459886 container exec_died 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:42:52 compute-0 systemd[1]: libpod-conmon-576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22.scope: Deactivated successfully.
Nov 26 01:42:53 compute-0 podman[394241]: 2025-11-26 01:42:53.576178538 +0000 UTC m=+0.120090491 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 01:42:53 compute-0 python3.9[394240]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:42:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:54 compute-0 python3.9[394412]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 26 01:42:56 compute-0 python3.9[394576]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:56 compute-0 systemd[1]: Started libpod-conmon-1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9.scope.
Nov 26 01:42:56 compute-0 podman[394577]: 2025-11-26 01:42:56.20023968 +0000 UTC m=+0.157479317 container exec 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, distribution-scope=public, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, managed_by=edpm_ansible, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=)
Nov 26 01:42:56 compute-0 podman[394577]: 2025-11-26 01:42:56.235410063 +0000 UTC m=+0.192649640 container exec_died 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=)
Nov 26 01:42:56 compute-0 systemd[1]: libpod-conmon-1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9.scope: Deactivated successfully.
Nov 26 01:42:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:57 compute-0 podman[394654]: 2025-11-26 01:42:57.608339434 +0000 UTC m=+0.147132915 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 26 01:42:58 compute-0 python3.9[394777]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:42:58 compute-0 systemd[1]: Started libpod-conmon-1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9.scope.
Nov 26 01:42:58 compute-0 podman[394778]: 2025-11-26 01:42:58.375215855 +0000 UTC m=+0.136922107 container exec 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Nov 26 01:42:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:42:58 compute-0 podman[394778]: 2025-11-26 01:42:58.413104315 +0000 UTC m=+0.174810577 container exec_died 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc.)
Nov 26 01:42:58 compute-0 systemd[1]: libpod-conmon-1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9.scope: Deactivated successfully.
Nov 26 01:42:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:42:59 compute-0 podman[158021]: time="2025-11-26T01:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:42:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Nov 26 01:42:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8093 "" "Go-http-client/1.1"
Nov 26 01:43:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:00 compute-0 python3.9[394958]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: ERROR   01:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:43:01 compute-0 openstack_network_exporter[367323]: 
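The exporter errors above are expected on this node: openstack_network_exporter bind-mounts /run/openvswitch and /run/ovn (see its config_data earlier) and probes for appctl control sockets, but ovn-northd does not run on a compute host and no userspace (netdev) datapath exists for the PMD queries. The presence check reduces to a glob; the patterns below follow the usual <daemon>.<pid>.ctl naming, which is an assumption about this deployment:

import glob

# appctl control sockets are typically named <daemon>.<pid>.ctl in the
# daemon's run directory.
for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                "/run/ovn/ovn-northd.*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control socket files found")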
Nov 26 01:43:01 compute-0 podman[395082]: 2025-11-26 01:43:01.547628479 +0000 UTC m=+0.174596620 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:43:01 compute-0 python3.9[395135]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 26 01:43:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:03 compute-0 python3.9[395300]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:43:03 compute-0 systemd[1]: Started libpod-conmon-e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce.scope.
Nov 26 01:43:03 compute-0 podman[395301]: 2025-11-26 01:43:03.301030552 +0000 UTC m=+0.141583269 container exec e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:43:03 compute-0 podman[395301]: 2025-11-26 01:43:03.311041834 +0000 UTC m=+0.151594531 container exec_died e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:43:03 compute-0 systemd[1]: libpod-conmon-e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce.scope: Deactivated successfully.
Nov 26 01:43:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:04 compute-0 python3.9[395483]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:43:04 compute-0 systemd[1]: Started libpod-conmon-e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce.scope.
Nov 26 01:43:04 compute-0 podman[395484]: 2025-11-26 01:43:04.705572275 +0000 UTC m=+0.144060748 container exec e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:43:04 compute-0 podman[395484]: 2025-11-26 01:43:04.740719988 +0000 UTC m=+0.179208441 container exec_died e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:43:04 compute-0 systemd[1]: libpod-conmon-e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce.scope: Deactivated successfully.
Nov 26 01:43:05 compute-0 python3.9[395664]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
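The exec/file sequence above is how edpm_ansible aligns ownership of the healthcheck mount with the account inside the container: it runs id -u and id -g through podman exec, then applies the result (here 0/0, since the agent runs as root) to /var/lib/openstack/healthchecks/ovn_metadata_agent with mode 0700. A minimal Python sketch of the same pattern, assuming podman on PATH and a running container of that name:

    # Sketch of the UID/GID discovery + recursive chown pattern seen above.
    import os, pathlib, subprocess

    name = "ovn_metadata_agent"
    uid = int(subprocess.check_output(["podman", "exec", name, "id", "-u"]))
    gid = int(subprocess.check_output(["podman", "exec", name, "id", "-g"]))

    root = pathlib.Path("/var/lib/openstack/healthchecks") / name
    for p in [root, *root.rglob("*")]:            # recurse=True equivalent
        os.chown(p, uid, gid)
        p.chmod(0o700)                            # mode=0700 as in the task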
Nov 26 01:43:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:07 compute-0 python3.9[395816]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.325 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.326 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.326 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.359 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.359 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.360 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:08 compute-0 nova_compute[350387]: 2025-11-26 01:43:08.360 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:08 compute-0 python3.9[395981]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:43:08 compute-0 systemd[1]: Started libpod-conmon-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.scope.
Nov 26 01:43:08 compute-0 podman[395982]: 2025-11-26 01:43:08.683396518 +0000 UTC m=+0.156407176 container exec ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 26 01:43:08 compute-0 podman[395982]: 2025-11-26 01:43:08.717259985 +0000 UTC m=+0.190270633 container exec_died ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 01:43:08 compute-0 systemd[1]: libpod-conmon-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.scope: Deactivated successfully.
Nov 26 01:43:09 compute-0 nova_compute[350387]: 2025-11-26 01:43:09.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:10 compute-0 nova_compute[350387]: 2025-11-26 01:43:10.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:10 compute-0 python3.9[396164]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 01:43:11 compute-0 systemd[1]: Started libpod-conmon-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.scope.
Nov 26 01:43:11 compute-0 podman[396165]: 2025-11-26 01:43:11.087608125 +0000 UTC m=+0.157373524 container exec ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:11 compute-0 podman[396165]: 2025-11-26 01:43:11.12496963 +0000 UTC m=+0.194735019 container exec_died ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:11 compute-0 systemd[1]: libpod-conmon-ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2.scope: Deactivated successfully.
Nov 26 01:43:11 compute-0 nova_compute[350387]: 2025-11-26 01:43:11.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:11 compute-0 nova_compute[350387]: 2025-11-26 01:43:11.315 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:11 compute-0 nova_compute[350387]: 2025-11-26 01:43:11.316 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.333 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.334 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.334 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.335 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.336 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:43:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:12 compute-0 python3.9[396348]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:43:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446401771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:43:12 compute-0 nova_compute[350387]: 2025-11-26 01:43:12.834 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
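The ceph df --format=json subprocess logged above is how nova's RBD image backend sizes its storage pool on each resource audit pass. A hedged sketch of issuing the same call and reading the totals back, assuming the standard stats/pools keys of ceph's JSON output:

    # Hedged sketch: replicate the `ceph df --format=json` call nova makes.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)

    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

    for pool in df["pools"]:                      # per-pool view
        s = pool["stats"]
        print(pool["name"], s["bytes_used"], s["max_avail"])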
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.409 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.412 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4599MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.413 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.413 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:43:13 compute-0 podman[396399]: 2025-11-26 01:43:13.568224469 +0000 UTC m=+0.103911385 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:43:13 compute-0 podman[396398]: 2025-11-26 01:43:13.574674841 +0000 UTC m=+0.116726386 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:43:13 compute-0 podman[396397]: 2025-11-26 01:43:13.604546884 +0000 UTC m=+0.149537322 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.892 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.893 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:43:13 compute-0 nova_compute[350387]: 2025-11-26 01:43:13.908 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:43:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:43:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855891093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:43:14 compute-0 nova_compute[350387]: 2025-11-26 01:43:14.339 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:43:14 compute-0 nova_compute[350387]: 2025-11-26 01:43:14.353 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:43:14 compute-0 nova_compute[350387]: 2025-11-26 01:43:14.368 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
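The inventory dict logged above is what the resource tracker reports to placement; schedulable capacity follows placement's usual formula, (total - reserved) * allocation_ratio, so with these numbers the node advertises 32 VCPU, 7167 MB of RAM, and about 53 GB of disk. A quick check:

    # Sketch: compute schedulable capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1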
Nov 26 01:43:14 compute-0 nova_compute[350387]: 2025-11-26 01:43:14.371 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:43:14 compute-0 nova_compute[350387]: 2025-11-26 01:43:14.371 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:43:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:14 compute-0 python3.9[396599]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:15 compute-0 python3.9[396753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:16 compute-0 podman[396803]: 2025-11-26 01:43:16.270749257 +0000 UTC m=+0.129758194 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 01:43:16 compute-0 podman[396804]: 2025-11-26 01:43:16.336148064 +0000 UTC m=+0.190806398 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:43:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:16 compute-0 python3.9[396864]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:17 compute-0 python3.9[397024]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:18 compute-0 podman[397148]: 2025-11-26 01:43:18.593708969 +0000 UTC m=+0.134118107 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Nov 26 01:43:18 compute-0 python3.9[397195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:19 compute-0 python3.9[397273]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:20 compute-0 python3.9[397426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:21 compute-0 python3.9[397504]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cenyffgw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:22 compute-0 python3.9[397656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:23 compute-0 podman[397706]: 2025-11-26 01:43:23.803482984 +0000 UTC m=+0.130969989 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:43:24 compute-0 python3.9[397753]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:43:24.953 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:43:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:43:24.954 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:43:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:43:24.954 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:43:25 compute-0 python3.9[397905]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
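The nft -j list ruleset call above returns the live ruleset in libnftables JSON form, which the firewall role inspects before regenerating the edpm chains from the YAML under /var/lib/edpm-config/firewall. A sketch of walking that output to list tables and chains, assuming the standard {"nftables": [...]} envelope:

    # Sketch: parse `nft -j list ruleset` (libnftables JSON) output.
    import json, subprocess

    ruleset = json.loads(subprocess.check_output(["nft", "-j", "list", "ruleset"]))
    for item in ruleset["nftables"]:
        if "table" in item:
            t = item["table"]
            print(f"table {t['family']} {t['name']}")
        elif "chain" in item:
            c = item["chain"]
            print(f"  chain {c['name']} (table {c['table']})")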
Nov 26 01:43:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:27 compute-0 python3[398058]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 01:43:28 compute-0 podman[398182]: 2025-11-26 01:43:28.049278373 +0000 UTC m=+0.126405380 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, version=9.6)
Nov 26 01:43:28 compute-0 python3.9[398228]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:28 compute-0 python3.9[398309]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:29 compute-0 podman[158021]: time="2025-11-26T01:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:43:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:43:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8096 "" "Go-http-client/1.1"
Nov 26 01:43:30 compute-0 python3.9[398461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:30 compute-0 python3.9[398539]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:31 compute-0 openstack_network_exporter[367323]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:43:31 compute-0 openstack_network_exporter[367323]: ERROR   01:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:43:31 compute-0 openstack_network_exporter[367323]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:43:31 compute-0 openstack_network_exporter[367323]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:43:31 compute-0 openstack_network_exporter[367323]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:43:32 compute-0 podman[398663]: 2025-11-26 01:43:32.241460667 +0000 UTC m=+0.140169428 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:43:32 compute-0 python3.9[398714]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:33 compute-0 python3.9[398792]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:34 compute-0 python3.9[398944]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:34 compute-0 python3.9[399022]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:36 compute-0 python3.9[399174]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:36 compute-0 python3.9[399252]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:38 compute-0 python3.9[399404]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:43:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:39 compute-0 python3.9[399559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:40 compute-0 python3.9[399711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:43:41
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data']
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:43:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:43:41 compute-0 python3.9[399864]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 01:43:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:43 compute-0 python3.9[400016]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:43:43 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Nov 26 01:43:43 compute-0 systemd[1]: session-58.scope: Consumed 2min 7.888s CPU time.
Nov 26 01:43:43 compute-0 systemd-logind[800]: Session 58 logged out. Waiting for processes to exit.
Nov 26 01:43:43 compute-0 systemd-logind[800]: Removed session 58.
Nov 26 01:43:43 compute-0 podman[400042]: 2025-11-26 01:43:43.812011299 +0000 UTC m=+0.113357371 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 01:43:43 compute-0 podman[400043]: 2025-11-26 01:43:43.817410982 +0000 UTC m=+0.112759675 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:43:43 compute-0 podman[400041]: 2025-11-26 01:43:43.84426984 +0000 UTC m=+0.150940452 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 26 01:43:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:46 compute-0 podman[400097]: 2025-11-26 01:43:46.54049397 +0000 UTC m=+0.095678222 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 01:43:46 compute-0 podman[400098]: 2025-11-26 01:43:46.613226044 +0000 UTC m=+0.152211059 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:43:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d0102f8e-bbce-409b-adee-2d4dd3b1dcee does not exist
Nov 26 01:43:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5157e946-73e5-433c-b301-33c7e60e143f does not exist
Nov 26 01:43:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0054a466-22e2-4044-b46e-406283cbea9c does not exist
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:43:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:43:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:43:49 compute-0 podman[400292]: 2025-11-26 01:43:49.264609519 +0000 UTC m=+0.134179529 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:43:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:43:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:49 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.079023001 +0000 UTC m=+0.090128416 container create 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.038294821 +0000 UTC m=+0.049400286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:50 compute-0 systemd[1]: Started libpod-conmon-21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904.scope.
Nov 26 01:43:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.210157853 +0000 UTC m=+0.221263268 container init 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.227350509 +0000 UTC m=+0.238455894 container start 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.232206116 +0000 UTC m=+0.243311501 container attach 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:43:50 compute-0 zen_gould[400443]: 167 167
Nov 26 01:43:50 compute-0 systemd[1]: libpod-21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904.scope: Deactivated successfully.
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.241103427 +0000 UTC m=+0.252208832 container died 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b8507bf08363f052b53727c9e63c8d897980f55d0a62f66bbd38624b9635faf-merged.mount: Deactivated successfully.
Nov 26 01:43:50 compute-0 podman[400428]: 2025-11-26 01:43:50.3071065 +0000 UTC m=+0.318211905 container remove 21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:50 compute-0 systemd[1]: libpod-conmon-21dca69b15a83dc4345fdca29875a44273aa44bf5c3e40a8dcb511ccedf70904.scope: Deactivated successfully.
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:50 compute-0 systemd-logind[800]: New session 59 of user zuul.
Nov 26 01:43:50 compute-0 systemd[1]: Started Session 59 of User zuul.
Nov 26 01:43:50 compute-0 podman[400467]: 2025-11-26 01:43:50.525146076 +0000 UTC m=+0.062723372 container create 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:43:50 compute-0 podman[400467]: 2025-11-26 01:43:50.499287016 +0000 UTC m=+0.036864392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:50 compute-0 systemd[1]: Started libpod-conmon-33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201.scope.
Nov 26 01:43:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:50 compute-0 podman[400467]: 2025-11-26 01:43:50.658734398 +0000 UTC m=+0.196311734 container init 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:50 compute-0 podman[400467]: 2025-11-26 01:43:50.674153223 +0000 UTC m=+0.211730549 container start 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:50 compute-0 podman[400467]: 2025-11-26 01:43:50.681043088 +0000 UTC m=+0.218620434 container attach 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:43:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:43:51 compute-0 distracted_khayyam[400484]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:43:51 compute-0 distracted_khayyam[400484]: --> relative data size: 1.0
Nov 26 01:43:51 compute-0 distracted_khayyam[400484]: --> All data devices are unavailable
Nov 26 01:43:51 compute-0 python3.9[400649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:43:51 compute-0 systemd[1]: libpod-33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201.scope: Deactivated successfully.
Nov 26 01:43:51 compute-0 systemd[1]: libpod-33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201.scope: Consumed 1.195s CPU time.
Nov 26 01:43:51 compute-0 podman[400467]: 2025-11-26 01:43:51.925604945 +0000 UTC m=+1.463182281 container died 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6518b27b0dd8a01196942eb5365f055c3a583fdb5058429e749d07aaf566dcec-merged.mount: Deactivated successfully.
Nov 26 01:43:52 compute-0 podman[400467]: 2025-11-26 01:43:52.030596989 +0000 UTC m=+1.568174295 container remove 33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:43:52 compute-0 systemd[1]: libpod-conmon-33d04ee6c80113e03674a0f1a28b6f35b314529a2a200f03a8a64a7e02487201.scope: Deactivated successfully.
Nov 26 01:43:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.000283946 +0000 UTC m=+0.096357532 container create 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:52.964204097 +0000 UTC m=+0.060277693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:53 compute-0 systemd[1]: Started libpod-conmon-0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7.scope.
Nov 26 01:43:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.146543384 +0000 UTC m=+0.242616990 container init 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.164736648 +0000 UTC m=+0.260810164 container start 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.17120185 +0000 UTC m=+0.267275436 container attach 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:43:53 compute-0 hopeful_booth[400934]: 167 167
Nov 26 01:43:53 compute-0 systemd[1]: libpod-0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7.scope: Deactivated successfully.
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.177398645 +0000 UTC m=+0.273472191 container died 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5422459815b801490e979928380ef4b7fcab6ecf2369eb4f4b3491ced3704a4e-merged.mount: Deactivated successfully.
Nov 26 01:43:53 compute-0 podman[400895]: 2025-11-26 01:43:53.248124412 +0000 UTC m=+0.344197928 container remove 0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:43:53 compute-0 systemd[1]: libpod-conmon-0c992720a0758c0ad3f3f77658f6eae91484d6493bb2391ac4e29a0e0c27f5e7.scope: Deactivated successfully.
Nov 26 01:43:53 compute-0 podman[401010]: 2025-11-26 01:43:53.513984358 +0000 UTC m=+0.079811424 container create c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:43:53 compute-0 podman[401010]: 2025-11-26 01:43:53.487305005 +0000 UTC m=+0.053132111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:53 compute-0 systemd[1]: Started libpod-conmon-c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729.scope.
Nov 26 01:43:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f08db8c84b307e3f1ff4b19fded7fa8689b3624a11847a6ab9c0ed7be72618/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f08db8c84b307e3f1ff4b19fded7fa8689b3624a11847a6ab9c0ed7be72618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f08db8c84b307e3f1ff4b19fded7fa8689b3624a11847a6ab9c0ed7be72618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f08db8c84b307e3f1ff4b19fded7fa8689b3624a11847a6ab9c0ed7be72618/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:53 compute-0 podman[401010]: 2025-11-26 01:43:53.68229001 +0000 UTC m=+0.248117126 container init c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:43:53 compute-0 python3.9[401005]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 26 01:43:53 compute-0 podman[401010]: 2025-11-26 01:43:53.714477578 +0000 UTC m=+0.280304634 container start c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 26 01:43:53 compute-0 podman[401010]: 2025-11-26 01:43:53.729446721 +0000 UTC m=+0.295273867 container attach c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:43:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.0 total, 600.0 interval
    Cumulative writes: 4611 writes, 20K keys, 4611 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
    Cumulative WAL: 4611 writes, 4611 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1296 writes, 5624 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
    Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    112.8      0.19              0.11        11    0.018       0      0       0.0       0.0
      L6      1/0    6.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    174.2    142.9      0.48              0.27        10    0.048     42K   5269       0.0       0.0
     Sum      1/0    6.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    124.5    134.3      0.68              0.38        21    0.032     42K   5269       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    130.3    130.5      0.27              0.16         8    0.034     18K   2066       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    174.2    142.9      0.48              0.27        10    0.048     42K   5269       0.0       0.0
    High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.0      0.19              0.11        10    0.019       0      0       0.0       0.0
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 1800.0 total, 600.0 interval
    Flush(GB): cumulative 0.021, interval 0.007
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.7 seconds
    Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 6.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000122 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(409,6.14 MB,1.99512%) FilterBlock(22,125.23 KB,0.0397075%) IndexBlock(22,241.64 KB,0.076616%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
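
[editor's note] The stats dump above was exported with rsyslog-style octal escaping of control characters (#012 for newline; the #033[00m sequences in the nova_compute lines below are escaped ANSI resets) and has been reflowed here for readability. A minimal sketch to undo that escaping on any such journal export, assuming the #NNN octal convention:

    # Undo rsyslog's octal control-character escaping (#012 = newline,
    # #011 = tab, #033 = ESC) so multi-line messages like the RocksDB
    # stats dump read naturally. Input: an exported journal on stdin.
    import re
    import sys

    ESCAPE = re.compile(r"#(\d{3})")

    for line in sys.stdin:
        sys.stdout.write(ESCAPE.sub(lambda m: chr(int(m.group(1), 8)), line))
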
Nov 26 01:43:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:54 compute-0 podman[401136]: 2025-11-26 01:43:54.578465271 +0000 UTC m=+0.120825312 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
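
[editor's note] Podman's health_status events (multipathd here, openstack_network_exporter and node_exporter further down) embed the edpm_ansible-managed container definition as a Python-style dict literal in the config_data label. A sketch for recovering it from an exported journal line; the brace matching assumes the literal is balanced and that no braces occur inside its strings (true for the events in this log), and ast.literal_eval keeps the parse safe:

    import ast
    import sys

    def extract_config_data(msg):
        # Locate the config_data={...} span by brace matching, then parse
        # the Python-style dict literal without evaluating code.
        start = msg.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(msg[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(msg[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    for line in sys.stdin:
        if "health_status" in line and "config_data=" in line:
            cfg = extract_config_data(line)
            print(cfg.get("image"), cfg.get("healthcheck", {}).get("test"))
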
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]: {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    "0": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "devices": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "/dev/loop3"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            ],
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_name": "ceph_lv0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_size": "21470642176",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "name": "ceph_lv0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "tags": {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_name": "ceph",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.crush_device_class": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.encrypted": "0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_id": "0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.vdo": "0"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            },
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "vg_name": "ceph_vg0"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        }
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    ],
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    "1": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "devices": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "/dev/loop4"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            ],
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_name": "ceph_lv1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_size": "21470642176",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "name": "ceph_lv1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "tags": {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_name": "ceph",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.crush_device_class": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.encrypted": "0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_id": "1",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.vdo": "0"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            },
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "vg_name": "ceph_vg1"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        }
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    ],
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    "2": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "devices": [
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "/dev/loop5"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            ],
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_name": "ceph_lv2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_size": "21470642176",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "name": "ceph_lv2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "tags": {
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.cluster_name": "ceph",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.crush_device_class": "",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.encrypted": "0",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osd_id": "2",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:                "ceph.vdo": "0"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            },
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "type": "block",
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:            "vg_name": "ceph_vg2"
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:        }
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]:    ]
Nov 26 01:43:54 compute-0 jovial_vaughan[401025]: }
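
[editor's note] The JSON the short-lived jovial_vaughan container printed above has the shape of ceph-volume lvm list --format json: a map from OSD id to its logical volumes, with the ceph.* LV tags repeated in parsed form under "tags". A sketch that reduces it to one line per OSD, reading the captured JSON on stdin:

    # Summarize the per-OSD LVM inventory (ceph-volume lvm list
    # --format json shape, as printed above).
    import json
    import sys

    inventory = json.load(sys.stdin)
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} type={lv['type']}")
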
Nov 26 01:43:54 compute-0 systemd[1]: libpod-c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729.scope: Deactivated successfully.
Nov 26 01:43:54 compute-0 podman[401010]: 2025-11-26 01:43:54.640139592 +0000 UTC m=+1.205966658 container died c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2f08db8c84b307e3f1ff4b19fded7fa8689b3624a11847a6ab9c0ed7be72618-merged.mount: Deactivated successfully.
Nov 26 01:43:54 compute-0 podman[401010]: 2025-11-26 01:43:54.753787881 +0000 UTC m=+1.319614917 container remove c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:43:54 compute-0 systemd[1]: libpod-conmon-c100c9926f9e89f176da7fe012bea843144dfee70abe45c54e3078ffdbb13729.scope: Deactivated successfully.
Nov 26 01:43:54 compute-0 python3.9[401205]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 01:43:55 compute-0 podman[401395]: 2025-11-26 01:43:55.943141969 +0000 UTC m=+0.097126983 container create 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:43:55 compute-0 podman[401395]: 2025-11-26 01:43:55.902768939 +0000 UTC m=+0.056754043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:56 compute-0 systemd[1]: Started libpod-conmon-08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605.scope.
Nov 26 01:43:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:56 compute-0 podman[401395]: 2025-11-26 01:43:56.07527813 +0000 UTC m=+0.229263174 container init 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:43:56 compute-0 podman[401395]: 2025-11-26 01:43:56.092185807 +0000 UTC m=+0.246170851 container start 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:43:56 compute-0 podman[401395]: 2025-11-26 01:43:56.098625239 +0000 UTC m=+0.252610283 container attach 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:43:56 compute-0 unruffled_pare[401453]: 167 167
Nov 26 01:43:56 compute-0 systemd[1]: libpod-08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605.scope: Deactivated successfully.
Nov 26 01:43:56 compute-0 podman[401395]: 2025-11-26 01:43:56.105942795 +0000 UTC m=+0.259927899 container died 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b30bb57aa17d46eed68f0c352346d0a925c19d4c34d5a74f19a8acc621a7a0f-merged.mount: Deactivated successfully.
Nov 26 01:43:56 compute-0 podman[401395]: 2025-11-26 01:43:56.183351371 +0000 UTC m=+0.337336415 container remove 08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:43:56 compute-0 systemd[1]: libpod-conmon-08db8cd1cc496f57d8b4c1dd32332994dbce0d579d46b4bdd50dda994cc86605.scope: Deactivated successfully.
Nov 26 01:43:56 compute-0 python3.9[401458]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 01:43:56 compute-0 podman[401479]: 2025-11-26 01:43:56.435263773 +0000 UTC m=+0.080244956 container create dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:43:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:56 compute-0 podman[401479]: 2025-11-26 01:43:56.400044749 +0000 UTC m=+0.045025982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:43:56 compute-0 systemd[1]: Started libpod-conmon-dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271.scope.
Nov 26 01:43:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9710ad652b1aaebf2a9d2440706c09ac50ee1a6daffd9f8931b278264148737/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9710ad652b1aaebf2a9d2440706c09ac50ee1a6daffd9f8931b278264148737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9710ad652b1aaebf2a9d2440706c09ac50ee1a6daffd9f8931b278264148737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9710ad652b1aaebf2a9d2440706c09ac50ee1a6daffd9f8931b278264148737/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:43:56 compute-0 podman[401479]: 2025-11-26 01:43:56.60624803 +0000 UTC m=+0.251229263 container init dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:43:56 compute-0 podman[401479]: 2025-11-26 01:43:56.621429389 +0000 UTC m=+0.266410552 container start dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 01:43:56 compute-0 podman[401479]: 2025-11-26 01:43:56.626553864 +0000 UTC m=+0.271535097 container attach dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.952622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121436952660, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1465, "num_deletes": 251, "total_data_size": 2325269, "memory_usage": 2374832, "flush_reason": "Manual Compaction"}
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121436971562, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2292214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19415, "largest_seqno": 20879, "table_properties": {"data_size": 2285393, "index_size": 3956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14016, "raw_average_key_size": 19, "raw_value_size": 2271748, "raw_average_value_size": 3204, "num_data_blocks": 181, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764121280, "oldest_key_time": 1764121280, "file_creation_time": 1764121436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19034 microseconds, and 11156 cpu microseconds.
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.971650) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2292214 bytes OK
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.971675) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.974207) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.974230) EVENT_LOG_v1 {"time_micros": 1764121436974223, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.974252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2318843, prev total WAL file size 2318843, number of live WAL files 2.
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.976189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2238KB)], [47(6799KB)]
Nov 26 01:43:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121436976259, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9254692, "oldest_snapshot_seqno": -1}
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4265 keys, 7478316 bytes, temperature: kUnknown
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121437029384, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7478316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7448847, "index_size": 17711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105460, "raw_average_key_size": 24, "raw_value_size": 7370531, "raw_average_value_size": 1728, "num_data_blocks": 744, "num_entries": 4265, "num_filter_entries": 4265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764121436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.029651) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7478316 bytes
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.031875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.0 rd, 140.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.6 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4779, records dropped: 514 output_compression: NoCompression
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.031902) EVENT_LOG_v1 {"time_micros": 1764121437031889, "job": 24, "event": "compaction_finished", "compaction_time_micros": 53202, "compaction_time_cpu_micros": 33387, "output_level": 6, "num_output_files": 1, "total_output_size": 7478316, "num_input_records": 4779, "num_output_records": 4265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121437032678, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121437035183, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:56.975572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.035371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.035378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.035380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.035382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:43:57 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:43:57.035384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
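
[editor's note] Each EVENT_LOG_v1 payload in the ceph-mon lines above is a self-contained JSON object, so the flush/compaction activity can be summarized mechanically. A sketch, reading an exported journal on stdin:

    # Extract the EVENT_LOG_v1 JSON payloads and summarize flushes and
    # compactions (the other event types are skipped).
    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_finished":
            print(f"job {ev['job']}: flush done, lsm_state={ev['lsm_state']}")
        elif ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: L{ev['output_level']} compaction, "
                  f"{ev['num_input_records']} -> {ev['num_output_records']} "
                  f"records, {mb:.1f} MB out in {secs:.3f}s")
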
Nov 26 01:43:57 compute-0 laughing_lamport[401495]: {
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_id": 0,
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "type": "bluestore"
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    },
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_id": 2,
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "type": "bluestore"
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    },
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_id": 1,
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:        "type": "bluestore"
Nov 26 01:43:57 compute-0 laughing_lamport[401495]:    }
Nov 26 01:43:57 compute-0 laughing_lamport[401495]: }
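
[editor's note] The laughing_lamport output has the shape of ceph-volume raw list: the same three bluestore OSDs, this time keyed by osd_uuid. Those uuids match the ceph.osd_fsid tags in the LVM listing earlier, which a short cross-check makes explicit; the file names below are placeholders for wherever the two JSON documents were saved:

    # Cross-check the two inventories: LVM view keyed by OSD id vs.
    # raw/bluestore view keyed by OSD uuid.
    import json

    with open("lvm_list.json") as f:   # jovial_vaughan output
        by_id = json.load(f)
    with open("raw_list.json") as f:   # laughing_lamport output
        by_uuid = json.load(f)

    for osd_id, lvs in sorted(by_id.items(), key=lambda kv: int(kv[0])):
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        raw = by_uuid.get(fsid)
        ok = raw is not None and raw["osd_id"] == int(osd_id)
        print(f"osd.{osd_id} fsid={fsid}: {'consistent' if ok else 'MISMATCH'}")
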
Nov 26 01:43:57 compute-0 systemd[1]: libpod-dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271.scope: Deactivated successfully.
Nov 26 01:43:57 compute-0 podman[401479]: 2025-11-26 01:43:57.849915431 +0000 UTC m=+1.494896624 container died dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:43:57 compute-0 systemd[1]: libpod-dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271.scope: Consumed 1.213s CPU time.
Nov 26 01:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9710ad652b1aaebf2a9d2440706c09ac50ee1a6daffd9f8931b278264148737-merged.mount: Deactivated successfully.
Nov 26 01:43:57 compute-0 podman[401479]: 2025-11-26 01:43:57.930117736 +0000 UTC m=+1.575098899 container remove dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:43:57 compute-0 systemd[1]: libpod-conmon-dc68cc1afa38cefe2b232122993597bfbff53fca927044398aff3867cc2a3271.scope: Deactivated successfully.
Nov 26 01:43:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:43:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:43:58 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:58 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 52361b22-11e5-457f-98e9-bf9271d5e925 does not exist
Nov 26 01:43:58 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8e7c0f48-f229-48a9-aeff-59d4e4b7f7e4 does not exist
Nov 26 01:43:58 compute-0 podman[401639]: 2025-11-26 01:43:58.247599289 +0000 UTC m=+0.101581199 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9)
Nov 26 01:43:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:43:58 compute-0 python3.9[401759]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:43:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:43:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:43:59 compute-0 podman[158021]: time="2025-11-26T01:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:43:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:43:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8097 "" "Go-http-client/1.1"
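
[editor's note] The two GET requests above are the podman system service answering its libpod REST API over the local socket (the same endpoints podman ps and podman stats use). A sketch issuing the first query from Python; the rootful socket path /run/podman/podman.sock is an assumption about this host's configuration:

    # Query the libpod REST API over podman's unix socket, mirroring the
    # GET /v4.9.3/libpod/containers/json?all=true request logged above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], ",".join(c.get("Names", [])), c.get("State"))
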
Nov 26 01:43:59 compute-0 python3.9[401837]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:44:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:00 compute-0 python3.9[401989]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: ERROR   01:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:44:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:44:02 compute-0 python3.9[402141]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 01:44:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:02 compute-0 podman[402180]: 2025-11-26 01:44:02.55121271 +0000 UTC m=+0.104209453 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:44:02 compute-0 python3.9[402243]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 01:44:03 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Nov 26 01:44:03 compute-0 systemd[1]: session-59.scope: Consumed 10.446s CPU time.
Nov 26 01:44:03 compute-0 systemd-logind[800]: Session 59 logged out. Waiting for processes to exit.
Nov 26 01:44:03 compute-0 systemd-logind[800]: Removed session 59.
Nov 26 01:44:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.368 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.369 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.369 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.369 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.386 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.386 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.387 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.387 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 nova_compute[350387]: 2025-11-26 01:44:09.388 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:11 compute-0 nova_compute[350387]: 2025-11-26 01:44:11.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:11 compute-0 nova_compute[350387]: 2025-11-26 01:44:11.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:11 compute-0 nova_compute[350387]: 2025-11-26 01:44:11.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
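The nova-compute lines above are oslo.service periodic tasks firing on their timers; each "Running periodic task ..." line is one decorated method being dispatched. A hedged sketch of how such a task is declared, assuming the oslo.service periodic_task API (the task body is a made-up placeholder, not nova's code):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            # Real implementations live in nova.compute.manager.ComputeManager;
            # the service loop drives these via run_periodic_tasks(context).
            pass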
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.338 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.338 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
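The acquire/release pairs above come from oslo.concurrency's lockutils. The same pattern, sketched with the lock name taken from the log (the guarded body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Everything here runs under the "compute_resources" lock,
        # matching the acquired/released pairs logged above.
        pass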
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.338 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.339 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:44:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:44:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2481885451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:44:12 compute-0 nova_compute[350387]: 2025-11-26 01:44:12.862 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
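The resource tracker shells out to ceph df exactly as logged. A self-contained sketch of that call (the command line is copied from the log; the JSON field names follow recent Ceph releases and should be verified against your version):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(out)
    stats = df["stats"]  # cluster-wide totals
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])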
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.455 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.456 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4590MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.456 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.457 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.533 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.533 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:44:13 compute-0 nova_compute[350387]: 2025-11-26 01:44:13.563 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:44:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:44:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724882398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:44:14 compute-0 nova_compute[350387]: 2025-11-26 01:44:14.149 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:44:14 compute-0 nova_compute[350387]: 2025-11-26 01:44:14.157 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:44:14 compute-0 nova_compute[350387]: 2025-11-26 01:44:14.176 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
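From the inventory above, placement capacity per resource class is (total - reserved) * allocation_ratio: VCPU (8 - 0) * 4.0 = 32, MEMORY_MB (7679 - 512) * 1.0 = 7167, DISK_GB (59 - 0) * 0.9 = 53.1. Recomputed directly from the logged data (only the fields needed for the arithmetic are kept):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)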
Nov 26 01:44:14 compute-0 nova_compute[350387]: 2025-11-26 01:44:14.178 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:44:14 compute-0 nova_compute[350387]: 2025-11-26 01:44:14.178 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:44:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:14 compute-0 podman[402313]: 2025-11-26 01:44:14.554357736 +0000 UTC m=+0.100194890 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 01:44:14 compute-0 podman[402312]: 2025-11-26 01:44:14.573229448 +0000 UTC m=+0.120172333 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 01:44:14 compute-0 podman[402314]: 2025-11-26 01:44:14.599722316 +0000 UTC m=+0.130948728 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:44:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:17 compute-0 podman[402370]: 2025-11-26 01:44:17.593768874 +0000 UTC m=+0.136364381 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Nov 26 01:44:17 compute-0 podman[402371]: 2025-11-26 01:44:17.663022659 +0000 UTC m=+0.200385978 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 26 01:44:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:19 compute-0 podman[402415]: 2025-11-26 01:44:19.56161896 +0000 UTC m=+0.114343579 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., name=ubi9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 01:44:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:44:24.955 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:44:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:44:24.955 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:44:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:44:24.955 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:44:25 compute-0 podman[402435]: 2025-11-26 01:44:25.58099388 +0000 UTC m=+0.123503738 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 01:44:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:28 compute-0 podman[402454]: 2025-11-26 01:44:28.5587966 +0000 UTC m=+0.106673113 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 01:44:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:29 compute-0 podman[158021]: time="2025-11-26T01:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:44:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:44:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8093 "" "Go-http-client/1.1"
Nov 26 01:44:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:31 compute-0 openstack_network_exporter[367323]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:44:31 compute-0 openstack_network_exporter[367323]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:44:31 compute-0 openstack_network_exporter[367323]: ERROR   01:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:44:31 compute-0 openstack_network_exporter[367323]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:44:31 compute-0 openstack_network_exporter[367323]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:44:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232885772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232885772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138154792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138154792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:44:33 compute-0 podman[402473]: 2025-11-26 01:44:33.575770291 +0000 UTC m=+0.122170690 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2991877099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:44:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:44:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2991877099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:44:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:44:41
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'images', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:44:41 compute-0 ceph-mgr[193049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2845592742
Nov 26 01:44:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.856 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.857 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.857 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.858 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.861 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.862 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.862 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.862 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.863 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.864 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.865 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.865 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.868 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ad718470>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.878 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.880 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.880 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:44:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:44:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:44 compute-0 podman[402498]: 2025-11-26 01:44:44.825107036 +0000 UTC m=+0.108828154 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:44:44 compute-0 podman[402500]: 2025-11-26 01:44:44.825179058 +0000 UTC m=+0.107073414 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:44:44 compute-0 podman[402499]: 2025-11-26 01:44:44.839621435 +0000 UTC m=+0.121459410 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 01:44:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:48 compute-0 podman[402556]: 2025-11-26 01:44:48.611816863 +0000 UTC m=+0.162420647 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:44:48 compute-0 podman[402557]: 2025-11-26 01:44:48.648781697 +0000 UTC m=+0.192391643 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:44:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:50 compute-0 podman[402598]: 2025-11-26 01:44:50.578509215 +0000 UTC m=+0.124398823 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:44:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:44:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:44:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:56 compute-0 podman[402616]: 2025-11-26 01:44:56.558671745 +0000 UTC m=+0.102236546 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 01:44:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:44:58 compute-0 podman[402710]: 2025-11-26 01:44:58.829541191 +0000 UTC m=+0.098084099 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:44:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3e9ecdf5-c1e9-4e52-8777-61e092f26548 does not exist
Nov 26 01:44:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5b0c326e-8a47-47ff-9ad7-2a22c7a0886c does not exist
Nov 26 01:44:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7926a751-6837-4605-862f-e7185079cefb does not exist
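The three "complete: ev ... does not exist" warnings come from the mgr progress module being asked to close events it no longer tracks, typically after a mgr restart or once an event has already expired; they are harmless. The module's current view can be dumped with the progress commands, sketched here via subprocess (assumes a local admin keyring; the JSON field names are as observed in recent Ceph releases and should be treated as an assumption):

    import json
    import subprocess

    out = subprocess.run(["ceph", "progress", "json"],
                         capture_output=True, text=True, check=True).stdout
    report = json.loads(out)
    print(report.get("events", []))     # in-flight events
    print(report.get("completed", []))  # recently completed events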
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:44:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
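Each request in this burst appears twice: once as the mon's handle_command trace and once on the audit channel, and the cephadm mgr (mgr.compute-0.vbisdw) is the caller for all of them. The same JSON command interface is reachable from python-rados; a minimal sketch, assuming /etc/ceph/ceph.conf and an admin keyring are present on this host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # the same JSON payload the mon logs above as handle_command
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(out.decode())
    cluster.shutdown()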
Nov 26 01:44:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
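_set_new_cache_sizes is the mon's periodic memory autotuner: the three allocations it prints (inc_alloc + full_alloc + kv_alloc = 1019215872) sum to roughly the reported cache_size, splitting the budget between incremental osdmaps, full osdmaps, and the rocksdb cache. The budget derives from the mon's memory target, which can be inspected through the config subsystem; a sketch, assuming the standard mon_memory_target option:

    import subprocess

    out = subprocess.run(["ceph", "config", "get", "mon", "mon_memory_target"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())  # bytes; raising it grows the caches logged above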
Nov 26 01:44:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:44:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:44:59 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:44:59 compute-0 podman[158021]: time="2025-11-26T01:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:44:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:44:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8100 "" "Go-http-client/1.1"
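The two podman[158021] lines are the libpod REST service answering a Go client (the GET paths and 200 responses are logged in access-log style). The same endpoints can be queried directly over the API socket; a standard-library sketch, assuming the rootful socket at /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])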
Nov 26 01:45:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.538233176 +0000 UTC m=+0.079157523 container create ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.505272542 +0000 UTC m=+0.046196939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:00 compute-0 systemd[1]: Started libpod-conmon-ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135.scope.
Nov 26 01:45:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.675686489 +0000 UTC m=+0.216610886 container init ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.694205373 +0000 UTC m=+0.235129730 container start ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.700671137 +0000 UTC m=+0.241595554 container attach ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:45:00 compute-0 funny_williamson[402937]: 167 167
Nov 26 01:45:00 compute-0 systemd[1]: libpod-ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135.scope: Deactivated successfully.
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.708795147 +0000 UTC m=+0.249719464 container died ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbe95e15c571b3076f2881d28d988f743818905279617f52d0951eeff5aa220-merged.mount: Deactivated successfully.
Nov 26 01:45:00 compute-0 podman[402921]: 2025-11-26 01:45:00.780312532 +0000 UTC m=+0.321236849 container remove ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_williamson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 01:45:00 compute-0 systemd[1]: libpod-conmon-ae6b6a777f35d6760b1b0aa92735f3108d8891356e00fa6895cb083b8f84d135.scope: Deactivated successfully.
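This create, init, start, attach, died, remove burst is cephadm's usual pattern: run a short-lived ceph container, capture its stdout, and tear everything down. The "167 167" printed by funny_williamson is the ceph uid and gid inside the image, which cephadm probes before chowning data directories; the exact probe is cephadm's internal business, so the sketch below is only an assumed equivalent. Note also that podman can flush the "image pull" event after "create", so timestamps inside one burst are not strictly ordered.

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # assumed equivalent of the uid/gid probe: stat a ceph-owned path in the image
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"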
Nov 26 01:45:01 compute-0 podman[402960]: 2025-11-26 01:45:01.028940994 +0000 UTC m=+0.086291275 container create 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:45:01 compute-0 podman[402960]: 2025-11-26 01:45:00.987099729 +0000 UTC m=+0.044450070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:01 compute-0 systemd[1]: Started libpod-conmon-14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8.scope.
Nov 26 01:45:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
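The "supports timestamps until 2038" notices fire whenever podman's bind mounts remount paths on an XFS filesystem formatted without the bigtime feature; they are informational, not errors. Whether a filesystem already has bigtime can be read from xfs_info; a sketch, where the mount point is illustrative and should be the one backing /var/lib/containers:

    import subprocess

    mount = "/var"  # adjust to the XFS mount backing /var/lib/containers
    info = subprocess.run(["xfs_info", mount],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime=1" in info)  # True once the fs is formatted/upgraded with bigtime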
Nov 26 01:45:01 compute-0 podman[402960]: 2025-11-26 01:45:01.217188465 +0000 UTC m=+0.274538836 container init 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:45:01 compute-0 podman[402960]: 2025-11-26 01:45:01.24737243 +0000 UTC m=+0.304722721 container start 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:45:01 compute-0 podman[402960]: 2025-11-26 01:45:01.256243441 +0000 UTC m=+0.313593722 container attach 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:45:01 compute-0 openstack_network_exporter[367323]: ERROR   01:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:45:01 compute-0 openstack_network_exporter[367323]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:45:01 compute-0 openstack_network_exporter[367323]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:45:01 compute-0 openstack_network_exporter[367323]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:45:01 compute-0 openstack_network_exporter[367323]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
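These exporter errors mean no ovsdb-server or ovn-northd control sockets were found under the paths mounted into the container, and the pmd-* queries fail because no userspace (netdev) datapath exists; on a node without an OVS DPDK datapath or a locally running northd, all four are expected. A quick host-side check for the sockets that appctl-style tools look for:

    import glob

    print(glob.glob("/var/run/openvswitch/*.ctl"))  # ovs-vswitchd / ovsdb-server control sockets
    print(glob.glob("/var/run/ovn/*.ctl"))          # ovn-northd control socket, if it runs here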
Nov 26 01:45:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:02 compute-0 condescending_edison[402976]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:45:02 compute-0 condescending_edison[402976]: --> relative data size: 1.0
Nov 26 01:45:02 compute-0 condescending_edison[402976]: --> All data devices are unavailable
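"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is ceph-volume's drive-group report saying the three LVs named in the spec are already consumed as OSDs, so there is nothing new to create; the inventory dump a few containers later (recursing_mahavira) confirms this. The same availability check can be run directly on the host; a sketch, assuming ceph-volume is installed locally and using field names from its inventory JSON:

    import json
    import subprocess

    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for dev in inv:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))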
Nov 26 01:45:02 compute-0 podman[402960]: 2025-11-26 01:45:02.562106746 +0000 UTC m=+1.619457037 container died 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:45:02 compute-0 systemd[1]: libpod-14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8.scope: Deactivated successfully.
Nov 26 01:45:02 compute-0 systemd[1]: libpod-14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8.scope: Consumed 1.256s CPU time.
Nov 26 01:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0159c2f3a6699af78575fa86d423ac1365bde06c453fbd76fd42fa18b3d917-merged.mount: Deactivated successfully.
Nov 26 01:45:02 compute-0 podman[402960]: 2025-11-26 01:45:02.652517657 +0000 UTC m=+1.709867908 container remove 14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:45:02 compute-0 systemd[1]: libpod-conmon-14f63784d55dfb82d83699a9efb5abb990ec62cb7912b9ef1c1770960c1c91a8.scope: Deactivated successfully.
Nov 26 01:45:03 compute-0 podman[403156]: 2025-11-26 01:45:03.921150698 +0000 UTC m=+0.098875831 container create 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:45:03 compute-0 podman[403156]: 2025-11-26 01:45:03.882673499 +0000 UTC m=+0.060398682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:03 compute-0 systemd[1]: Started libpod-conmon-2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53.scope.
Nov 26 01:45:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:04 compute-0 podman[403156]: 2025-11-26 01:45:04.058472658 +0000 UTC m=+0.236197761 container init 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:45:04 compute-0 podman[403156]: 2025-11-26 01:45:04.070814947 +0000 UTC m=+0.248540050 container start 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:45:04 compute-0 podman[403156]: 2025-11-26 01:45:04.076632972 +0000 UTC m=+0.254358075 container attach 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:45:04 compute-0 cranky_wescoff[403171]: 167 167
Nov 26 01:45:04 compute-0 systemd[1]: libpod-2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53.scope: Deactivated successfully.
Nov 26 01:45:04 compute-0 podman[403156]: 2025-11-26 01:45:04.081216452 +0000 UTC m=+0.258941565 container died 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a5884e9ab22cf4e520dfffe9dfbf70f0062d5078dff18b9bf0a3c7758ead2cf-merged.mount: Deactivated successfully.
Nov 26 01:45:04 compute-0 podman[403156]: 2025-11-26 01:45:04.138056902 +0000 UTC m=+0.315782005 container remove 2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wescoff, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:45:04 compute-0 systemd[1]: libpod-conmon-2240dc33508a7aa709e7c5f51d5d1480263f14cbe82d395048a2ec483968ca53.scope: Deactivated successfully.
Nov 26 01:45:04 compute-0 podman[403168]: 2025-11-26 01:45:04.160532448 +0000 UTC m=+0.164550201 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
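The node_exporter healthcheck above also records its full configuration: most default collectors are disabled, the systemd collector is restricted to units matching edpm_.*, ovs.*, openvswitch, virt.* and rsyslog, and TLS is configured via --web.config.file. A scrape sketch against the 9100 host port from that config; the CA path is illustrative, taken from the certs volume mount, and may not match the actual bundle layout:

    import ssl
    import urllib.request

    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")  # assumed path
    with urllib.request.urlopen("https://compute-0:9100/metrics", context=ctx) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)  # only the whitelisted units appear, per the flags above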
Nov 26 01:45:04 compute-0 podman[403215]: 2025-11-26 01:45:04.33678198 +0000 UTC m=+0.071889067 container create 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:45:04 compute-0 podman[403215]: 2025-11-26 01:45:04.304757983 +0000 UTC m=+0.039865120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:04 compute-0 systemd[1]: Started libpod-conmon-47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3.scope.
Nov 26 01:45:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3c54fa65117d0bc9d3b2eb8106cbf53dd089a5655a19c864d104f0bddd3213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3c54fa65117d0bc9d3b2eb8106cbf53dd089a5655a19c864d104f0bddd3213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3c54fa65117d0bc9d3b2eb8106cbf53dd089a5655a19c864d104f0bddd3213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3c54fa65117d0bc9d3b2eb8106cbf53dd089a5655a19c864d104f0bddd3213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:04 compute-0 podman[403215]: 2025-11-26 01:45:04.499458028 +0000 UTC m=+0.234565115 container init 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:45:04 compute-0 podman[403215]: 2025-11-26 01:45:04.515983016 +0000 UTC m=+0.251090083 container start 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:45:04 compute-0 podman[403215]: 2025-11-26 01:45:04.521294336 +0000 UTC m=+0.256401473 container attach 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:45:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]: {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    "0": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "devices": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "/dev/loop3"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            ],
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_name": "ceph_lv0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_size": "21470642176",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "name": "ceph_lv0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "tags": {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_name": "ceph",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.crush_device_class": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.encrypted": "0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_id": "0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.vdo": "0"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            },
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "vg_name": "ceph_vg0"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        }
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    ],
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    "1": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "devices": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "/dev/loop4"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            ],
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_name": "ceph_lv1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_size": "21470642176",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "name": "ceph_lv1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "tags": {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_name": "ceph",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.crush_device_class": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.encrypted": "0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_id": "1",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.vdo": "0"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            },
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "vg_name": "ceph_vg1"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        }
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    ],
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    "2": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "devices": [
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "/dev/loop5"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            ],
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_name": "ceph_lv2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_size": "21470642176",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "name": "ceph_lv2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "tags": {
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.cluster_name": "ceph",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.crush_device_class": "",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.encrypted": "0",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osd_id": "2",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:                "ceph.vdo": "0"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            },
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "type": "block",
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:            "vg_name": "ceph_vg2"
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:        }
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]:    ]
Nov 26 01:45:05 compute-0 recursing_mahavira[403232]: }
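The JSON block printed by recursing_mahavira is ceph-volume's LVM listing: OSDs 0, 1 and 2 are block-type bluestore LVs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) backed by /dev/loop3-5, all tagged with the same cluster fsid and the default_drive_group osdspec affinity. A sketch that produces the same mapping directly, assuming ceph-volume on the host and the field names shown in the block above:

    import json
    import subprocess

    lvs = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for osd_id, entries in sorted(lvs.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            print(f"osd.{osd_id}", e["lv_path"], "on", ",".join(e["devices"]),
                  "fsid", e["tags"]["ceph.osd_fsid"])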
Nov 26 01:45:05 compute-0 systemd[1]: libpod-47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3.scope: Deactivated successfully.
Nov 26 01:45:05 compute-0 podman[403241]: 2025-11-26 01:45:05.434254143 +0000 UTC m=+0.060656879 container died 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f3c54fa65117d0bc9d3b2eb8106cbf53dd089a5655a19c864d104f0bddd3213-merged.mount: Deactivated successfully.
Nov 26 01:45:05 compute-0 podman[403241]: 2025-11-26 01:45:05.538032822 +0000 UTC m=+0.164435528 container remove 47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:45:05 compute-0 systemd[1]: libpod-conmon-47f10d38dfca34c793c09aa7100f347ee599d4d5ae0614307370da652282d9d3.scope: Deactivated successfully.
Nov 26 01:45:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:06 compute-0 podman[403394]: 2025-11-26 01:45:06.86267067 +0000 UTC m=+0.098094970 container create fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:45:06 compute-0 podman[403394]: 2025-11-26 01:45:06.825471216 +0000 UTC m=+0.060895566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:06 compute-0 systemd[1]: Started libpod-conmon-fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939.scope.
Nov 26 01:45:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:07 compute-0 podman[403394]: 2025-11-26 01:45:07.005240248 +0000 UTC m=+0.240664598 container init fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:45:07 compute-0 podman[403394]: 2025-11-26 01:45:07.02157307 +0000 UTC m=+0.256997360 container start fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:45:07 compute-0 confident_curran[403409]: 167 167
Nov 26 01:45:07 compute-0 podman[403394]: 2025-11-26 01:45:07.028439285 +0000 UTC m=+0.263863585 container attach fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:45:07 compute-0 systemd[1]: libpod-fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939.scope: Deactivated successfully.
Nov 26 01:45:07 compute-0 podman[403394]: 2025-11-26 01:45:07.031724268 +0000 UTC m=+0.267148558 container died fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-68786f0ea828aa15c634e5edbbe2ba23ad210ff5c0889024fe28aec3743cb96a-merged.mount: Deactivated successfully.
Nov 26 01:45:07 compute-0 podman[403394]: 2025-11-26 01:45:07.099296281 +0000 UTC m=+0.334720551 container remove fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:45:07 compute-0 systemd[1]: libpod-conmon-fe700f6aab8b53e21136f114176ca110d45c8f307003fb55d599c32f110c5939.scope: Deactivated successfully.
Nov 26 01:45:07 compute-0 podman[403432]: 2025-11-26 01:45:07.372314434 +0000 UTC m=+0.086161961 container create 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:45:07 compute-0 podman[403432]: 2025-11-26 01:45:07.343686443 +0000 UTC m=+0.057534020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:45:07 compute-0 systemd[1]: Started libpod-conmon-9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723.scope.
Nov 26 01:45:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8982ae631d4633cbe405f1c6300d9413b2d5c59538227b28b6efeac6a4cda4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8982ae631d4633cbe405f1c6300d9413b2d5c59538227b28b6efeac6a4cda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8982ae631d4633cbe405f1c6300d9413b2d5c59538227b28b6efeac6a4cda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8982ae631d4633cbe405f1c6300d9413b2d5c59538227b28b6efeac6a4cda4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:45:07 compute-0 podman[403432]: 2025-11-26 01:45:07.51904649 +0000 UTC m=+0.232894097 container init 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:45:07 compute-0 podman[403432]: 2025-11-26 01:45:07.542497824 +0000 UTC m=+0.256345361 container start 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:45:07 compute-0 podman[403432]: 2025-11-26 01:45:07.548167465 +0000 UTC m=+0.262015002 container attach 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:45:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:08 compute-0 great_taussig[403448]: {
Nov 26 01:45:08 compute-0 great_taussig[403448]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_id": 0,
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "type": "bluestore"
Nov 26 01:45:08 compute-0 great_taussig[403448]:    },
Nov 26 01:45:08 compute-0 great_taussig[403448]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_id": 2,
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "type": "bluestore"
Nov 26 01:45:08 compute-0 great_taussig[403448]:    },
Nov 26 01:45:08 compute-0 great_taussig[403448]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_id": 1,
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:45:08 compute-0 great_taussig[403448]:        "type": "bluestore"
Nov 26 01:45:08 compute-0 great_taussig[403448]:    }
Nov 26 01:45:08 compute-0 great_taussig[403448]: }
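[editor's note] The JSON printed by great_taussig is an OSD inventory keyed by OSD UUID; its shape matches what ceph-volume's raw listing emits. A minimal sketch for tabulating it (the file name is hypothetical; the keys are copied from the output above):

    import json

    # Parse the inventory JSON logged above: uuid -> {ceph_fsid, device,
    # osd_id, osd_uuid, type}. "osd_list.json" is a made-up file name.
    with open("osd_list.json") as fh:
        osds = json.load(fh)

    for info in sorted(osds.values(), key=lambda i: i["osd_id"]):
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore), etc.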
Nov 26 01:45:08 compute-0 systemd[1]: libpod-9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723.scope: Deactivated successfully.
Nov 26 01:45:08 compute-0 podman[403432]: 2025-11-26 01:45:08.698240247 +0000 UTC m=+1.412087894 container died 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:45:08 compute-0 systemd[1]: libpod-9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723.scope: Consumed 1.162s CPU time.
Nov 26 01:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d8982ae631d4633cbe405f1c6300d9413b2d5c59538227b28b6efeac6a4cda4-merged.mount: Deactivated successfully.
Nov 26 01:45:08 compute-0 podman[403432]: 2025-11-26 01:45:08.826807038 +0000 UTC m=+1.540654595 container remove 9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:45:08 compute-0 systemd[1]: libpod-conmon-9848c4de68c4bde10c7eb4942f2f6c7dbd89c82457a21c810e3888ec9c5c5723.scope: Deactivated successfully.
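[editor's note] The init/start/attach, died, remove sequence above is the footprint of a one-shot container. A sketch of the kind of invocation that produces it, assuming (not confirmed by this log) that cephadm ran ceph-volume's raw listing inside the ceph image seen above:

    import subprocess

    # One-shot `podman run --rm` yields exactly the init/start/attach ->
    # died -> remove event chain journald recorded. The image digest is
    # copied from the log; the ceph-volume subcommand is an inference from
    # the JSON the container printed.
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "ceph-volume", "raw", "list"],
        check=True,
    )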
Nov 26 01:45:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:45:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:45:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:45:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:45:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 180f3a7f-b2a1-4e34-9ffc-e15c70b2bbdd does not exist
Nov 26 01:45:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bb009c3e-e350-41bf-a4e7-b2e24081040e does not exist
Nov 26 01:45:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:45:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.174 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.175 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.176 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.176 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.189 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.190 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:10 compute-0 nova_compute[350387]: 2025-11-26 01:45:10.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
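[editor's note] The "Running periodic task" lines come from oslo.service's periodic task runner iterating over decorated ComputeManager methods. A minimal sketch of that registration pattern; the 60-second spacing is an illustrative assumption, not nova's actual setting:

    from oslo_service import periodic_task

    # Methods registered this way are what run_periodic_tasks() fires in
    # turn, producing one DEBUG line per task as logged above.
    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # the real body lives in nova.compute.manager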
Nov 26 01:45:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.365 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.366 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.366 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.367 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.367 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:45:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:45:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522084466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:45:13 compute-0 nova_compute[350387]: 2025-11-26 01:45:13.877 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
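[editor's note] The round trip above shows nova's resource tracker shelling out to ceph df as client.openstack and timing the call (0.510s). A sketch of the same probe; the keys read from the reply exist in ceph df --format=json output:

    import json
    import subprocess

    # Reproduce the subprocess nova_compute logged: query cluster-wide
    # capacity as the `openstack` client using the node's ceph.conf.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])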
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.446 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.448 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4548MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.449 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.449 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:45:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.686 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.687 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:45:14 compute-0 nova_compute[350387]: 2025-11-26 01:45:14.733 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:45:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:45:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233962132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:45:15 compute-0 nova_compute[350387]: 2025-11-26 01:45:15.314 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:45:15 compute-0 nova_compute[350387]: 2025-11-26 01:45:15.328 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:45:15 compute-0 nova_compute[350387]: 2025-11-26 01:45:15.349 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:45:15 compute-0 nova_compute[350387]: 2025-11-26 01:45:15.353 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:45:15 compute-0 nova_compute[350387]: 2025-11-26 01:45:15.353 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
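[editor's note] The inventory reported to Placement at 01:45:15 combines totals, reservations, and allocation ratios. A worked sketch of the usable capacity each resource class yields, assuming the usual Placement convention of (total - reserved) * allocation_ratio:

    # Values copied from the set_inventory_for_provider line above; the
    # capacity formula is an assumption about Placement's convention.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1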
Nov 26 01:45:15 compute-0 podman[403586]: 2025-11-26 01:45:15.57666076 +0000 UTC m=+0.125003040 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 01:45:15 compute-0 podman[403588]: 2025-11-26 01:45:15.580939662 +0000 UTC m=+0.124778885 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:45:15 compute-0 podman[403587]: 2025-11-26 01:45:15.602713878 +0000 UTC m=+0.150224304 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 01:45:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 01:45:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926513399' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 01:45:17 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 01:45:17 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 01:45:17 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 01:45:18 compute-0 nova_compute[350387]: 2025-11-26 01:45:18.350 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:45:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:19 compute-0 podman[403645]: 2025-11-26 01:45:19.599616401 +0000 UTC m=+0.148854907 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:45:19 compute-0 podman[403646]: 2025-11-26 01:45:19.646744865 +0000 UTC m=+0.189741875 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 26 01:45:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:21 compute-0 podman[403688]: 2025-11-26 01:45:21.623943225 +0000 UTC m=+0.164582782 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0)
Nov 26 01:45:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:45:24.957 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:45:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:45:24.957 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:45:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:45:24.958 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
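[editor's note] The acquire/acquired/released triplet above is oslo.concurrency's named-lock logging around ProcessMonitor._check_child_processes, which also reports the waited/held durations. A sketch of the decorator pattern that produces it:

    from oslo_concurrency import lockutils

    # A function guarded by a named in-process lock; lockutils emits the
    # Acquiring / acquired / released DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # the real body iterates neutron's monitored child processes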
Nov 26 01:45:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:27 compute-0 podman[403706]: 2025-11-26 01:45:27.576639049 +0000 UTC m=+0.128608363 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 01:45:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:29 compute-0 podman[403726]: 2025-11-26 01:45:29.583160679 +0000 UTC m=+0.132401551 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, architecture=x86_64, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Nov 26 01:45:29 compute-0 podman[158021]: time="2025-11-26T01:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:45:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:45:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8098 "" "Go-http-client/1.1"
Nov 26 01:45:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: ERROR   01:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:45:31 compute-0 openstack_network_exporter[367323]: 
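[editor's note] The exporter errors above are failed ovs-appctl-style RPCs: no ovn-northd runs on this compute node and no userspace (netdev) datapath exists, so the control sockets those calls need are absent. The equivalent manual probes, which would only succeed on a host running OVS with PMD datapaths:

    import subprocess

    # Same appctl targets the exporter tried; expected to fail here for the
    # same reason the log shows ("please specify an existing datapath").
    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        subprocess.run(["ovs-appctl", cmd], check=False)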
Nov 26 01:45:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:34 compute-0 podman[403746]: 2025-11-26 01:45:34.589605904 +0000 UTC m=+0.141138158 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:45:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 26 01:45:39 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557707606' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 26 01:45:39 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 26 01:45:39 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 01:45:39 compute-0 ceph-mgr[193049]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 26 01:45:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:45:41
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'vms', 'images', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
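[editor's note] The balancer pass above ran in upmap mode with a 5% misplaced ceiling across the 11 listed pools and prepared no changes, which is expected on a near-empty cluster. The same state can be queried from the CLI; a subprocess sketch for consistency with the other examples:

    import subprocess

    # `ceph balancer status` reports the active mode and last optimization,
    # matching the mgr log lines above.
    subprocess.run(["ceph", "balancer", "status"], check=False)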
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:45:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
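[editor's note] The two rbd_support handlers reload per-pool trash-purge and mirror-snapshot schedules (hence the repeated load_schedules lines, once per handler per pool). Those schedules are managed with the rbd CLI; a sketch using pool names from this log, with the exact flags treated as assumptions:

    import subprocess

    # List any configured schedules for two of the pools seen above.
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                    "--pool", "vms"], check=False)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                    "--pool", "volumes"], check=False)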
Nov 26 01:45:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:46 compute-0 podman[403769]: 2025-11-26 01:45:46.558085445 +0000 UTC m=+0.112636441 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 26 01:45:46 compute-0 podman[403774]: 2025-11-26 01:45:46.583364661 +0000 UTC m=+0.120885745 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:45:46 compute-0 podman[403770]: 2025-11-26 01:45:46.612170327 +0000 UTC m=+0.150433612 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 01:45:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:50 compute-0 podman[403829]: 2025-11-26 01:45:50.561258795 +0000 UTC m=+0.113353652 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:45:50 compute-0 podman[403830]: 2025-11-26 01:45:50.646112288 +0000 UTC m=+0.185351921 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:45:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
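Each pg_autoscaler line above is the product of the logged fields: capacity ratio times bias times the cluster PG budget. A minimal check of that arithmetic, assuming the default mon_target_pg_per_osd of 100 across the 3 OSDs on this host (a 300-PG budget); the real module additionally clamps and rounds to a power of two, which is where the "quantized" values come from:

    # Verifies the 'pg target' figures printed by the autoscaler above.
    # Assumption: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300.
    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return capacity_ratio * bias * pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, matches the '.mgr' line
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, matches 'cephfs.cephfs.meta'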
Nov 26 01:45:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:52 compute-0 podman[403873]: 2025-11-26 01:45:52.589760226 +0000 UTC m=+0.139691338 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, vcs-type=git, version=9.4, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 01:45:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:45:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 5892 writes, 24K keys, 5892 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 5892 writes, 1002 syncs, 5.88 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
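The derived figures inside these periodic DB Stats dumps are plain ratios of the cumulative counters; a quick sketch against the numbers just logged for this OSD:

    # Ratios behind the DB Stats block above (ceph-osd pid 206645).
    writes, syncs = 5892, 1002
    ingest_gb, uptime_s = 0.02, 1800.1
    print(round(writes / syncs, 2))               # 5.88 writes per sync
    print(round(ingest_gb * 1024 / uptime_s, 2))  # 0.01 MB/s cumulative ingest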
Nov 26 01:45:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:45:58 compute-0 podman[403893]: 2025-11-26 01:45:58.567981444 +0000 UTC m=+0.127565124 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 01:45:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:45:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 7129 writes, 29K keys, 7129 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7129 writes, 1361 syncs, 5.24 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 01:45:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:45:59 compute-0 podman[158021]: time="2025-11-26T01:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:45:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:45:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8103 "" "Go-http-client/1.1"
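The two GET lines above are the podman service answering libpod REST calls over its API socket (a collector polling the container list and stats). A minimal client for the same endpoint, assuming the default root socket path /run/podman/podman.sock; the /v4.9.3 version prefix and query path are taken from the log:

    import http.client, socket

    class UDSConnection(http.client.HTTPConnection):
        """HTTP over a Unix domain socket (podman API service)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UDSConnection("/run/podman/podman.sock")  # assumed default path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])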
Nov 26 01:46:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:00 compute-0 podman[403913]: 2025-11-26 01:46:00.548993171 +0000 UTC m=+0.106646231 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Nov 26 01:46:01 compute-0 openstack_network_exporter[367323]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:46:01 compute-0 openstack_network_exporter[367323]: ERROR   01:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:46:01 compute-0 openstack_network_exporter[367323]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:46:01 compute-0 openstack_network_exporter[367323]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:46:01 compute-0 openstack_network_exporter[367323]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:46:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:05 compute-0 podman[403933]: 2025-11-26 01:46:05.552134843 +0000 UTC m=+0.096615798 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:46:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:46:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.2 total, 600.0 interval
    Cumulative writes: 5946 writes, 24K keys, 5946 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 5946 writes, 1004 syncs, 5.92 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 178 writes, 270 keys, 178 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 178 writes, 88 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 01:46:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 01:46:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:09 compute-0 nova_compute[350387]: 2025-11-26 01:46:09.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:10 compute-0 nova_compute[350387]: 2025-11-26 01:46:10.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.314 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.315 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:11 compute-0 nova_compute[350387]: 2025-11-26 01:46:11.316 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 626ee8c3-c9eb-4f1f-b236-002bf3b2655e does not exist
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fa6ce024-8b3a-44d4-8bcb-159cbd9d92ca does not exist
Nov 26 01:46:11 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev cdb8ac73-7d93-47dd-9645-559e3bbb0481 does not exist
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:46:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:46:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:46:12 compute-0 nova_compute[350387]: 2025-11-26 01:46:12.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.571735985 +0000 UTC m=+0.090942736 container create 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.535314684 +0000 UTC m=+0.054521475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:12 compute-0 systemd[1]: Started libpod-conmon-7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7.scope.
Nov 26 01:46:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.725064267 +0000 UTC m=+0.244271008 container init 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.741643917 +0000 UTC m=+0.260850658 container start 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.747683078 +0000 UTC m=+0.266889819 container attach 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:46:12 compute-0 nervous_proskuriakova[404355]: 167 167
Nov 26 01:46:12 compute-0 systemd[1]: libpod-7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7.scope: Deactivated successfully.
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.755427217 +0000 UTC m=+0.274633968 container died 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e0f2966aee09045c3f3a67487ab146dd24dc61f815e91f7e15982836a3fa59-merged.mount: Deactivated successfully.
Nov 26 01:46:12 compute-0 podman[404339]: 2025-11-26 01:46:12.826959843 +0000 UTC m=+0.346166564 container remove 7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_proskuriakova, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:46:12 compute-0 systemd[1]: libpod-conmon-7d7d9ca65dfbba5964ed7b238a6c1e237c958b29f60341169a64a289f6a1afe7.scope: Deactivated successfully.
Nov 26 01:46:13 compute-0 podman[404378]: 2025-11-26 01:46:13.101573441 +0000 UTC m=+0.077837646 container create 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:46:13 compute-0 podman[404378]: 2025-11-26 01:46:13.071283603 +0000 UTC m=+0.047547788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:13 compute-0 systemd[1]: Started libpod-conmon-06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835.scope.
Nov 26 01:46:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:13 compute-0 podman[404378]: 2025-11-26 01:46:13.282207247 +0000 UTC m=+0.258471522 container init 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 26 01:46:13 compute-0 nova_compute[350387]: 2025-11-26 01:46:13.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:13 compute-0 podman[404378]: 2025-11-26 01:46:13.302756869 +0000 UTC m=+0.279021084 container start 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:46:13 compute-0 podman[404378]: 2025-11-26 01:46:13.310080376 +0000 UTC m=+0.286344631 container attach 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.343 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.344 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:46:14 compute-0 ecstatic_kepler[404394]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:46:14 compute-0 ecstatic_kepler[404394]: --> relative data size: 1.0
Nov 26 01:46:14 compute-0 ecstatic_kepler[404394]: --> All data devices are unavailable
Nov 26 01:46:14 compute-0 systemd[1]: libpod-06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835.scope: Deactivated successfully.
Nov 26 01:46:14 compute-0 podman[404378]: 2025-11-26 01:46:14.538947741 +0000 UTC m=+1.515211956 container died 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:46:14 compute-0 systemd[1]: libpod-06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835.scope: Consumed 1.190s CPU time.
Nov 26 01:46:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-becad455dc4c9a173604a5c56305b1fff218a0bd798809c8b70f3f39a002c487-merged.mount: Deactivated successfully.
Nov 26 01:46:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:14 compute-0 podman[404378]: 2025-11-26 01:46:14.647153506 +0000 UTC m=+1.623417701 container remove 06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:46:14 compute-0 systemd[1]: libpod-conmon-06654cd5fd927fcc5af21d8bdc48327a86b62d932b608249661bf1f0c3c34835.scope: Deactivated successfully.
Nov 26 01:46:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:46:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930977416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:46:14 compute-0 nova_compute[350387]: 2025-11-26 01:46:14.880 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
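The storage probe the resource tracker just timed is an ordinary subprocess call; a sketch of the same check, with the command and paths copied verbatim from the log (the JSON field name used below follows ceph's df output and should be treated as an assumption):

    import json, subprocess

    # Same probe nova ran above: cluster capacity via 'ceph df' as client.openstack.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_avail_bytes"])  # assumed key from 'ceph df' JSON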
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.320 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.321 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.321 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.322 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.469 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.469 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:46:15 compute-0 nova_compute[350387]: 2025-11-26 01:46:15.516 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.736813268 +0000 UTC m=+0.071043663 container create 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:46:15 compute-0 systemd[1]: Started libpod-conmon-2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f.scope.
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.71110396 +0000 UTC m=+0.045334415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.86999831 +0000 UTC m=+0.204228775 container init 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.881719822 +0000 UTC m=+0.215950227 container start 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.88799686 +0000 UTC m=+0.222227315 container attach 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:46:15 compute-0 jovial_neumann[404629]: 167 167
Nov 26 01:46:15 compute-0 systemd[1]: libpod-2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f.scope: Deactivated successfully.
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.900787822 +0000 UTC m=+0.235018257 container died 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-77c25e18511926dbbcf74cbbebd80ef5ea43d5c34f7d436909f0524f28160e6b-merged.mount: Deactivated successfully.
Nov 26 01:46:15 compute-0 podman[404599]: 2025-11-26 01:46:15.986253153 +0000 UTC m=+0.320483568 container remove 2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:46:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:46:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/15966498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:46:16 compute-0 systemd[1]: libpod-conmon-2b3528d779c61cbb1d2c14e6084c212327eb54ee405568274f4c44d2be55b21f.scope: Deactivated successfully.
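The create → init → start → attach → died → remove sequence above (container jovial_neumann, lifetime well under a second) is cephadm's one-shot probe pattern: the mgr launches the ceph image, captures one line of output ("167 167", matching the ceph uid/gid in these images), and removes the container. A minimal sketch of the same probe, assuming the probed command is a uid/gid stat of /var/lib/ceph (hypothetical reconstruction; only the image reference is taken from the log):

    # Sketch: run a one-shot probe container the way the log shows cephadm doing.
    # Assumption: the probed command is `stat -c '%u %g' /var/lib/ceph`.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # expected "167 167", as jovial_neumann printed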
Nov 26 01:46:16 compute-0 nova_compute[350387]: 2025-11-26 01:46:16.071 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:46:16 compute-0 nova_compute[350387]: 2025-11-26 01:46:16.082 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:46:16 compute-0 nova_compute[350387]: 2025-11-26 01:46:16.098 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:46:16 compute-0 nova_compute[350387]: 2025-11-26 01:46:16.102 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:46:16 compute-0 nova_compute[350387]: 2025-11-26 01:46:16.103 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
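The resource-tracker pass above reports its inventory to placement; effective schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the figures in the inventory line work out as below (values copied from the log; the formula is standard nova/placement accounting):

    # Effective capacity implied by the logged inventory for provider
    # 0e9e5c9b-dee2-4076-966b-e19b2697b966.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1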
Nov 26 01:46:16 compute-0 podman[404654]: 2025-11-26 01:46:16.247031869 +0000 UTC m=+0.084240467 container create aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:46:16 compute-0 podman[404654]: 2025-11-26 01:46:16.211352078 +0000 UTC m=+0.048560726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:16 compute-0 systemd[1]: Started libpod-conmon-aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480.scope.
Nov 26 01:46:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567e17a1a3f8e79cb588a1678e2d349b7c435781ff9ccfec821a33cb2621fdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567e17a1a3f8e79cb588a1678e2d349b7c435781ff9ccfec821a33cb2621fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567e17a1a3f8e79cb588a1678e2d349b7c435781ff9ccfec821a33cb2621fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e567e17a1a3f8e79cb588a1678e2d349b7c435781ff9ccfec821a33cb2621fdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:16 compute-0 podman[404654]: 2025-11-26 01:46:16.444157661 +0000 UTC m=+0.281366299 container init aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:46:16 compute-0 podman[404654]: 2025-11-26 01:46:16.464931439 +0000 UTC m=+0.302140037 container start aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 01:46:16 compute-0 podman[404654]: 2025-11-26 01:46:16.470905779 +0000 UTC m=+0.308114377 container attach aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:46:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:17 compute-0 keen_khayyam[404670]: {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    "0": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "devices": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "/dev/loop3"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            ],
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_name": "ceph_lv0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_size": "21470642176",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "name": "ceph_lv0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "tags": {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_name": "ceph",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.crush_device_class": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.encrypted": "0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_id": "0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.vdo": "0"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            },
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "vg_name": "ceph_vg0"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        }
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    ],
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    "1": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "devices": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "/dev/loop4"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            ],
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_name": "ceph_lv1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_size": "21470642176",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "name": "ceph_lv1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "tags": {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_name": "ceph",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.crush_device_class": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.encrypted": "0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_id": "1",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.vdo": "0"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            },
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "vg_name": "ceph_vg1"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        }
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    ],
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    "2": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "devices": [
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "/dev/loop5"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            ],
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_name": "ceph_lv2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_size": "21470642176",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "name": "ceph_lv2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "tags": {
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.cluster_name": "ceph",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.crush_device_class": "",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.encrypted": "0",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osd_id": "2",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:                "ceph.vdo": "0"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            },
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "type": "block",
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:            "vg_name": "ceph_vg2"
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:        }
Nov 26 01:46:17 compute-0 keen_khayyam[404670]:    ]
Nov 26 01:46:17 compute-0 keen_khayyam[404670]: }
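The JSON block emitted by keen_khayyam has the shape of `ceph-volume lvm list --format json`: a map of OSD id to the logical volumes backing it, with the ceph.* metadata present both as a flat lv_tags string and as a parsed tags object. A sketch of consuming it to recover the OSD-to-device mapping (command and field names follow the output above; running it needs the same containerized ceph environment):

    # Sketch: map osd_id -> (LV path, osd_fsid, physical devices) from
    # `ceph-volume lvm list --format json` output like the block above.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"], lv["devices"])
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 835781ef-644a-4834-abb3-029e5bcba0ff ['/dev/loop3']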
Nov 26 01:46:17 compute-0 systemd[1]: libpod-aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480.scope: Deactivated successfully.
Nov 26 01:46:17 compute-0 podman[404654]: 2025-11-26 01:46:17.367134472 +0000 UTC m=+1.204343080 container died aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e567e17a1a3f8e79cb588a1678e2d349b7c435781ff9ccfec821a33cb2621fdf-merged.mount: Deactivated successfully.
Nov 26 01:46:17 compute-0 podman[404654]: 2025-11-26 01:46:17.483285132 +0000 UTC m=+1.320493700 container remove aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:46:17 compute-0 systemd[1]: libpod-conmon-aa77f2b067fa35e0f1fb717731d4c9a4b4955d52c7371ebefebbe5f8950c6480.scope: Deactivated successfully.
Nov 26 01:46:17 compute-0 podman[404682]: 2025-11-26 01:46:17.519091906 +0000 UTC m=+0.102167504 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:46:17 compute-0 podman[404681]: 2025-11-26 01:46:17.529094459 +0000 UTC m=+0.111471098 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 01:46:17 compute-0 podman[404679]: 2025-11-26 01:46:17.533783512 +0000 UTC m=+0.110571322 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 01:46:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.613477712 +0000 UTC m=+0.087934901 container create f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.57598639 +0000 UTC m=+0.050443639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:18 compute-0 systemd[1]: Started libpod-conmon-f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618.scope.
Nov 26 01:46:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.761607688 +0000 UTC m=+0.236064887 container init f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.779189996 +0000 UTC m=+0.253647185 container start f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.786308097 +0000 UTC m=+0.260765296 container attach f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:46:18 compute-0 jolly_dewdney[404903]: 167 167
Nov 26 01:46:18 compute-0 systemd[1]: libpod-f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618.scope: Deactivated successfully.
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.791125624 +0000 UTC m=+0.265582823 container died f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b17bed9ff254428ba9e0a9a990c41b55997f62d8f92fcb4533e47338767cf2-merged.mount: Deactivated successfully.
Nov 26 01:46:18 compute-0 podman[404887]: 2025-11-26 01:46:18.864272206 +0000 UTC m=+0.338729375 container remove f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 01:46:18 compute-0 systemd[1]: libpod-conmon-f45364d08902717f9ef8533dd5bdb4a24aef54a48ddccbd31bba1dcfc246f618.scope: Deactivated successfully.
Nov 26 01:46:19 compute-0 podman[404925]: 2025-11-26 01:46:19.150910504 +0000 UTC m=+0.089577468 container create dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:46:19 compute-0 podman[404925]: 2025-11-26 01:46:19.113519565 +0000 UTC m=+0.052186569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:46:19 compute-0 systemd[1]: Started libpod-conmon-dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266.scope.
Nov 26 01:46:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae025e477f243be44ad0438654ad07486ca64e66540c5133eb7aa7ac2c36a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae025e477f243be44ad0438654ad07486ca64e66540c5133eb7aa7ac2c36a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae025e477f243be44ad0438654ad07486ca64e66540c5133eb7aa7ac2c36a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae025e477f243be44ad0438654ad07486ca64e66540c5133eb7aa7ac2c36a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:46:19 compute-0 podman[404925]: 2025-11-26 01:46:19.321021912 +0000 UTC m=+0.259688916 container init dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:46:19 compute-0 podman[404925]: 2025-11-26 01:46:19.341132622 +0000 UTC m=+0.279799586 container start dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 01:46:19 compute-0 podman[404925]: 2025-11-26 01:46:19.346815823 +0000 UTC m=+0.285482787 container attach dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 26 01:46:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]: {
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_id": 0,
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "type": "bluestore"
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    },
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_id": 2,
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "type": "bluestore"
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    },
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_id": 1,
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:        "type": "bluestore"
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]:    }
Nov 26 01:46:20 compute-0 peaceful_sutherland[404941]: }
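This second report from peaceful_sutherland is keyed by OSD uuid and matches the shape of `ceph-volume raw list --format json` output; it should agree with the LVM listing above (same cluster fsid 36901f64-…, same three bluestore OSDs). A sketch of the cross-check, where LVM_LIST_JSON and RAW_LIST_JSON are hypothetical variables holding the two captured JSON blocks:

    # Sketch: verify the two device inventories agree. LVM_LIST_JSON and
    # RAW_LIST_JSON are placeholders for the JSON text captured above.
    import json

    lvm = json.loads(LVM_LIST_JSON)   # keyed by osd_id (string)
    raw = json.loads(RAW_LIST_JSON)   # keyed by osd_uuid

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            entry = raw[fsid]
            assert entry["osd_id"] == int(osd_id)
            assert entry["type"] == "bluestore"
    print("inventories agree for OSDs:",
          sorted(e["osd_id"] for e in raw.values()))  # [0, 1, 2]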
Nov 26 01:46:20 compute-0 systemd[1]: libpod-dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266.scope: Deactivated successfully.
Nov 26 01:46:20 compute-0 podman[404925]: 2025-11-26 01:46:20.493286303 +0000 UTC m=+1.431953267 container died dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:46:20 compute-0 systemd[1]: libpod-dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266.scope: Consumed 1.159s CPU time.
Nov 26 01:46:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ae025e477f243be44ad0438654ad07486ca64e66540c5133eb7aa7ac2c36a18-merged.mount: Deactivated successfully.
Nov 26 01:46:20 compute-0 podman[404925]: 2025-11-26 01:46:20.590775024 +0000 UTC m=+1.529441958 container remove dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:46:20 compute-0 systemd[1]: libpod-conmon-dc4f4ea8f7219846f8f4f9fb119948929eb4ad75b747caca5348512fa84a7266.scope: Deactivated successfully.
Nov 26 01:46:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:46:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:46:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:20 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev dd70cca6-a364-4d88-a46d-0babfe3ffa0e does not exist
Nov 26 01:46:20 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 23dd5233-99d1-4a8f-b2fb-445012e3d848 does not exist
Nov 26 01:46:20 compute-0 podman[404989]: 2025-11-26 01:46:20.725110229 +0000 UTC m=+0.082137468 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:46:20 compute-0 podman[405012]: 2025-11-26 01:46:20.918754263 +0000 UTC m=+0.158697485 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 26 01:46:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:46:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:23 compute-0 podman[405080]: 2025-11-26 01:46:23.585577415 +0000 UTC m=+0.129256641 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Nov 26 01:46:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:46:24.959 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:46:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:46:24.960 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:46:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:46:24.960 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:46:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:46:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2999913958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:46:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:46:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2999913958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
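The two audited commands are the usual pool-usage poll from an external client (entity client.openstack at 192.168.122.10, plausibly the cinder RBD driver computing free space for the volumes pool). The same queries issued from the CLI, reusing the credentials visible in the nova_compute line earlier (a sketch; field names follow the standard ceph JSON output):

    # Sketch: replicate the audited monitor queries with the openstack keyring.
    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    df = ceph_json("df")                                    # cluster + pool usage
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)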
Nov 26 01:46:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:29 compute-0 podman[405099]: 2025-11-26 01:46:29.540282757 +0000 UTC m=+0.096275458 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 01:46:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:29 compute-0 podman[158021]: time="2025-11-26T01:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:46:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:46:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8101 "" "Go-http-client/1.1"
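The two GET lines are the prometheus-podman-exporter (configured above with CONTAINER_HOST=unix:///run/podman/podman.sock) hitting the libpod REST API served by podman[158021]. The same endpoint can be queried over the unix socket directly; a standard-library sketch (socket path and URL taken from the log; root access assumed):

    # Sketch: query the libpod REST API over /run/podman/podman.sock.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read(200))  # first bytes of the container list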
Nov 26 01:46:30 compute-0 systemd-logind[800]: New session 60 of user zuul.
Nov 26 01:46:30 compute-0 systemd[1]: Started Session 60 of User zuul.
Nov 26 01:46:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:31 compute-0 openstack_network_exporter[367323]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:46:31 compute-0 openstack_network_exporter[367323]: ERROR   01:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:46:31 compute-0 openstack_network_exporter[367323]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:46:31 compute-0 openstack_network_exporter[367323]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:46:31 compute-0 openstack_network_exporter[367323]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
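These ERROR lines mean the exporter found no Unix control sockets for ovn-northd, ovsdb-server, or an OVS datapath, so those daemons are presumably not running (or not yet started) on this node. OVS and OVN daemons create <name>.<pid>.ctl sockets in their run directories; given the paths the exporter container mounts (/var/run/openvswitch and /var/lib/openvswitch/ovn, per the config_data in the following entry), a host-side check might be:

    # A running daemon leaves a <name>.<pid>.ctl control socket behind
    ls /var/run/openvswitch/ /var/lib/openvswitch/ovn/ 2>/dev/null | grep '\.ctl$'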
Nov 26 01:46:31 compute-0 podman[405272]: 2025-11-26 01:46:31.583209346 +0000 UTC m=+0.128080598 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 01:46:31 compute-0 python3[405313]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:46:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:34 compute-0 python3[405550]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
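In journald output, #012 is the syslog-escaped newline (octal 012 = LF), so the _raw_params above decode to this two-line shell snippet, which dumps the last 30 minutes of ceilometer_agent_compute journal messages:

    tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
    journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"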
Nov 26 01:46:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:35 compute-0 podman[405703]: 2025-11-26 01:46:35.835479201 +0000 UTC m=+0.122442829 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:46:35 compute-0 python3[405704]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
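The nova_compute check decodes the same way, differing only in the identifier passed to journalctl -t:

    tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
    journalctl -t "nova_compute" --no-pager -S "${tstamp}"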
Nov 26 01:46:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:38 compute-0 python3[405878]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 01:46:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:40 compute-0 python3[406031]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 01:46:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:46:41
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.log', 'vms']
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:46:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:46:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.857 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.858 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.858 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.859 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.862 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.864 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.865 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.865 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.866 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab7c0080>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.878 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.878 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.879 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.880 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.880 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.881 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.882 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.883 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.884 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.885 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.885 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:46:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:46:42.885 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
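Taken together, this polling cycle registers each compute pollster and then immediately skips it: the local_instances discovery returns an empty list (discovery cache [{'local_instances': []}]), i.e. no Nova instances are running on this compute node yet, so there is nothing to meter.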
Nov 26 01:46:43 compute-0 python3[406268]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 01:46:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:44 compute-0 python3[406433]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
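This check and the ceilometer_agent_compute variant at 01:46:43 decode (again, #012 being a newline) to a one-line status filter per service:

    podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter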
Nov 26 01:46:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:48 compute-0 podman[406472]: 2025-11-26 01:46:48.587625443 +0000 UTC m=+0.128842879 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 26 01:46:48 compute-0 podman[406473]: 2025-11-26 01:46:48.591639367 +0000 UTC m=+0.129402945 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:46:48 compute-0 podman[406471]: 2025-11-26 01:46:48.601469736 +0000 UTC m=+0.145420159 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
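The three health_status events above share one shape: a timestamp, the container ID, then a parenthesized key=value label list (image, name, health_status, config_data, ...). A minimal parsing sketch for the flat keys, assuming the nested config_data dict is not needed (a real parser would have to bracket-match it):

    import re

    def health_fields(line):
        # grab simple key=value labels out of the parenthesized list; the
        # lookbehind skips lookalikes such as container_name= and
        # org.label-schema.name=
        fields = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(rf"(?<=[ (]){key}=([^,)]+)", line)
            if m:
                fields[key] = m.group(1)
        return fields

    # e.g. the ovn_metadata_agent line above yields:
    # {'name': 'ovn_metadata_agent', 'health_status': 'healthy', 'health_failing_streak': '0'}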
Nov 26 01:46:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
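The mon's cache autotuner lines repeated through this window show a fixed split of its memory budget: the incremental osdmap cache, the full osdmap cache, and the rocksdb (kv) block cache together consume essentially the whole cache_size. A quick arithmetic check of the figures above (how the mon picks the split is internal to it; this only verifies the sum):

    # figures from the _set_new_cache_sizes line above, in bytes
    cache_size = 1020054731     # total autotuned budget (~0.95 GiB)
    inc_alloc  = 348127232      # incremental osdmap cache
    full_alloc = 348127232      # full osdmap cache
    kv_alloc   = 322961408      # rocksdb block cache

    print(inc_alloc + full_alloc + kv_alloc)   # 1019215872, just under cache_size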
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:46:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
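The autoscaler numbers above are self-consistent: for each pool, pg target = (fraction of space used) x bias x (total PG budget), and the result is then quantized to a power of two, with a resize proposed only when the quantized value differs enough from the current pg_num (cephfs.cephfs.meta: 16 proposed vs 32 current). A sketch that reproduces the logged targets, assuming the budget the numbers imply, 300 PGs, i.e. the default mon_target_pg_per_osd=100 across 3 OSDs (neither value appears in the log):

    def pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        # assumed budget: osds * target_pg_per_osd = 300, inferred from the log
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557  ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061047 ('cephfs.cephfs.meta')
    print(pg_target(2.5436283128215145e-07, 1.0))  # ~7.6309e-05 ('.rgw.root')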
Nov 26 01:46:51 compute-0 podman[406532]: 2025-11-26 01:46:51.61935776 +0000 UTC m=+0.155798574 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 01:46:51 compute-0 podman[406533]: 2025-11-26 01:46:51.639966124 +0000 UTC m=+0.169691797 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:46:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:54 compute-0 podman[406576]: 2025-11-26 01:46:54.582927165 +0000 UTC m=+0.120894545 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, name=ubi9, architecture=x86_64, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Nov 26 01:46:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:46:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:46:59 compute-0 podman[158021]: time="2025-11-26T01:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:46:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:46:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8101 "" "Go-http-client/1.1"
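These two GETs are the prometheus-podman-exporter polling podman's libpod REST API over the unix socket its config above mounts (/run/podman/podman.sock). The first query can be reproduced by hand; UnixHTTPConnection below is a local helper, not part of podman:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # local helper: plain HTTP over a unix socket
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # needs socket access
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))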
Nov 26 01:47:00 compute-0 podman[406595]: 2025-11-26 01:47:00.546677703 +0000 UTC m=+0.102903136 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 01:47:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:01 compute-0 openstack_network_exporter[367323]: ERROR   01:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:47:01 compute-0 openstack_network_exporter[367323]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:47:01 compute-0 openstack_network_exporter[367323]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:47:01 compute-0 openstack_network_exporter[367323]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:47:01 compute-0 openstack_network_exporter[367323]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
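Two distinct failures are mixed here. The "no control socket files found" errors mean the exporter could not locate an ovs-appctl-style control socket for ovsdb-server or ovn-northd, and the latter is presumably expected on this host, which runs ovn_controller rather than northd. The dpif-netdev errors instead reach ovs-vswitchd but target a userspace datapath this deployment does not have. A sketch of the socket discovery step (the rundir paths are the conventional ones, assumed here):

    import glob, os

    def find_ctl(daemon, rundir):
        # ovs/ovn daemons create <daemon>.<pid>.ctl control sockets in their rundir
        hits = glob.glob(os.path.join(rundir, daemon + ".*.ctl"))
        return hits[0] if hits else None

    print(find_ctl("ovsdb-server", "/run/openvswitch"))  # None -> the first error above
    print(find_ctl("ovn-northd", "/run/ovn"))            # None on a compute node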
Nov 26 01:47:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:02 compute-0 podman[406614]: 2025-11-26 01:47:02.578677124 +0000 UTC m=+0.127636886 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 01:47:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:06 compute-0 podman[406634]: 2025-11-26 01:47:06.552255044 +0000 UTC m=+0.107217017 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
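node_exporter here runs with most collectors disabled and a systemd unit whitelist; --collector.systemd.unit-include is applied as an anchored regex, so only EDPM, OVS/OVN, libvirt and rsyslog units are exported. A quick check of what the pattern admits (the unit names below are made up for illustration; fullmatch approximates node_exporter's anchored matching):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))
    # -> True, True, True, False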
Nov 26 01:47:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.322 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.323 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.323 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.323 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.324 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.351 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 01:47:12 compute-0 nova_compute[350387]: 2025-11-26 01:47:12.351 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
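Every "Running periodic task ComputeManager._..." line is oslo.service's PeriodicTasks dispatcher walking its registered methods; the DEBUG message comes from run_periodic_tasks, the frame named in each line. A minimal sketch of the registration pattern behind those names (illustrative, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # nova's real task gathers volume I/O stats; this shows the wiring

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # runs due tasks via the same dispatcher
                                          # that logged the lines above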
Nov 26 01:47:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:13 compute-0 nova_compute[350387]: 2025-11-26 01:47:13.348 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.336 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.338 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.338 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.339 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:47:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:47:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495304474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:47:14 compute-0 nova_compute[350387]: 2025-11-26 01:47:14.790 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
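The resource audit sizes Ceph-backed storage by shelling out to the ceph CLI; the command is verbatim in the DEBUG lines above, and the mon's audit channel shows the same request arriving as {"prefix": "df", "format": "json"}. Reproducing the call (the JSON field names below follow ceph's df output as I know it and are assumptions, not shown in the log):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]             # field names assumed
    print(stats["total_avail_bytes"] / 1024**3)  # ~60, matching the pgmap lines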
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.291 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.293 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.294 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.295 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.854 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.855 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.920 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.979 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 01:47:15 compute-0 nova_compute[350387]: 2025-11-26 01:47:15.979 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.013 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.032 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.052 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:47:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:47:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182072169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.554 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.565 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:47:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.580 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
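The inventory dict is identical between runs, which is why the tracker logs "Inventory has not changed". Placement's effective capacity per resource class is (total - reserved) * allocation_ratio; with the values above that gives 32 schedulable vCPUs, 7167 MB of RAM and about 53 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~53.1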
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.582 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:47:16 compute-0 nova_compute[350387]: 2025-11-26 01:47:16.582 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:47:17 compute-0 nova_compute[350387]: 2025-11-26 01:47:17.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:17 compute-0 nova_compute[350387]: 2025-11-26 01:47:17.316 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:17 compute-0 nova_compute[350387]: 2025-11-26 01:47:17.317 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:47:17 compute-0 nova_compute[350387]: 2025-11-26 01:47:17.318 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:47:17 compute-0 nova_compute[350387]: 2025-11-26 01:47:17.318 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 01:47:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:19 compute-0 podman[406702]: 2025-11-26 01:47:19.585106516 +0000 UTC m=+0.116300345 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 01:47:19 compute-0 podman[406703]: 2025-11-26 01:47:19.587771201 +0000 UTC m=+0.112484847 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:47:19 compute-0 podman[406701]: 2025-11-26 01:47:19.596621822 +0000 UTC m=+0.137030682 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:47:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:47:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:47:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:21 compute-0 podman[406873]: 2025-11-26 01:47:21.948156823 +0000 UTC m=+0.134221043 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:47:21 compute-0 podman[406874]: 2025-11-26 01:47:21.979607384 +0000 UTC m=+0.167436494 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 01:47:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:22 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.330147536 +0000 UTC m=+0.107141404 container create cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.275417227 +0000 UTC m=+0.052411095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:24 compute-0 systemd[1]: Started libpod-conmon-cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d.scope.
Nov 26 01:47:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.50926292 +0000 UTC m=+0.286256828 container init cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.527062784 +0000 UTC m=+0.304056652 container start cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:47:24 compute-0 stupefied_bhabha[407208]: 167 167
Nov 26 01:47:24 compute-0 systemd[1]: libpod-cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d.scope: Deactivated successfully.
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.574153588 +0000 UTC m=+0.351147506 container attach cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.575466855 +0000 UTC m=+0.352460713 container died cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:47:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7743667fd4c49e1291638e1436adb1b4365b1d75c141f5b7fd0e34785761c192-merged.mount: Deactivated successfully.
Nov 26 01:47:24 compute-0 podman[407191]: 2025-11-26 01:47:24.861938008 +0000 UTC m=+0.638931866 container remove cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:24 compute-0 systemd[1]: libpod-conmon-cc29754b6467874b745281986eb911594caa60295fd3be306e78b784d3d8e80d.scope: Deactivated successfully.
Nov 26 01:47:24 compute-0 podman[407227]: 2025-11-26 01:47:24.960899651 +0000 UTC m=+0.281894865 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 26 01:47:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:47:24.961 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:47:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:47:24.961 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:47:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:47:24.961 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
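The three DEBUG lines above are the standard oslo.concurrency acquire/held/released pattern around ProcessMonitor._check_child_processes. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is illustrative, not the agent's actual code:

    from oslo_concurrency import lockutils

    # lockutils.synchronized emits exactly the Acquiring/acquired/released
    # DEBUG lines seen above when debug logging is enabled.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # illustrative body: poll managed child processes, respawn dead ones
        pass

    check_child_processes()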
Nov 26 01:47:25 compute-0 podman[407252]: 2025-11-26 01:47:25.174792369 +0000 UTC m=+0.105090797 container create f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:47:25 compute-0 podman[407252]: 2025-11-26 01:47:25.133330415 +0000 UTC m=+0.063628913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:25 compute-0 systemd[1]: Started libpod-conmon-f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350.scope.
Nov 26 01:47:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9407c877e5451766be4b9479b75882db105418efe7dd69557a0b117b95bbe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9407c877e5451766be4b9479b75882db105418efe7dd69557a0b117b95bbe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9407c877e5451766be4b9479b75882db105418efe7dd69557a0b117b95bbe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9407c877e5451766be4b9479b75882db105418efe7dd69557a0b117b95bbe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:25 compute-0 podman[407252]: 2025-11-26 01:47:25.337110427 +0000 UTC m=+0.267408915 container init f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 01:47:25 compute-0 podman[407252]: 2025-11-26 01:47:25.357871275 +0000 UTC m=+0.288169673 container start f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:47:25 compute-0 podman[407252]: 2025-11-26 01:47:25.36229874 +0000 UTC m=+0.292597218 container attach f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 01:47:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:47:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028528923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:47:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:47:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028528923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
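The two audited commands above (cluster df, then a per-pool quota lookup on 'volumes') have the shape of a Cinder capacity poll issued as client.openstack. A minimal sketch sending the same mon commands through the librados Python binding, assuming a readable ceph.conf and a keyring for that client:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            # mon_command takes the same JSON the monitor logs in handle_command
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], "->", json.loads(out))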
Nov 26 01:47:27 compute-0 jovial_margulis[407268]: [
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:    {
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "available": false,
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "ceph_device": false,
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "lsm_data": {},
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "lvs": [],
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "path": "/dev/sr0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "rejected_reasons": [
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "Insufficient space (<5GB)",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "Has a FileSystem"
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        ],
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        "sys_api": {
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "actuators": null,
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "device_nodes": "sr0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "devname": "sr0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "human_readable_size": "482.00 KB",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "id_bus": "ata",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "model": "QEMU DVD-ROM",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "nr_requests": "2",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "parent": "/dev/sr0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "partitions": {},
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "path": "/dev/sr0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "removable": "1",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "rev": "2.5+",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "ro": "0",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "rotational": "1",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "sas_address": "",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "sas_device_handle": "",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "scheduler_mode": "mq-deadline",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "sectors": 0,
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "sectorsize": "2048",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "size": 493568.0,
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "support_discard": "2048",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "type": "disk",
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:            "vendor": "QEMU"
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:        }
Nov 26 01:47:27 compute-0 jovial_margulis[407268]:    }
Nov 26 01:47:27 compute-0 jovial_margulis[407268]: ]
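The JSON array above is a ceph-volume inventory report emitted from a one-shot cephadm container; /dev/sr0 is rejected for being under 5 GB and for already carrying a filesystem. A minimal sketch for reducing such a report to a usability summary (the input file name is illustrative):

    import json

    with open('inventory.json') as f:  # illustrative: a saved inventory report
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print(dev["path"], dev["sys_api"].get("human_readable_size"))
        else:
            print(dev["path"], "rejected:", ", ".join(dev["rejected_reasons"]))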
Nov 26 01:47:27 compute-0 systemd[1]: libpod-f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350.scope: Deactivated successfully.
Nov 26 01:47:27 compute-0 systemd[1]: libpod-f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350.scope: Consumed 2.421s CPU time.
Nov 26 01:47:27 compute-0 podman[409336]: 2025-11-26 01:47:27.736544465 +0000 UTC m=+0.037065311 container died f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 01:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd9407c877e5451766be4b9479b75882db105418efe7dd69557a0b117b95bbe0-merged.mount: Deactivated successfully.
Nov 26 01:47:27 compute-0 podman[409336]: 2025-11-26 01:47:27.818346142 +0000 UTC m=+0.118866918 container remove f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_margulis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 01:47:27 compute-0 systemd[1]: libpod-conmon-f9ff670b292937e9c690d53aa7cd85357825c7991d8fd0c782865017580b6350.scope: Deactivated successfully.
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:27 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6645ee64-0eab-4eba-a0cf-39d2a1e4be2d does not exist
Nov 26 01:47:27 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 900568bf-c8b6-408f-a7c4-3cc07f4596d9 does not exist
Nov 26 01:47:27 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8d8328d8-6e60-40b4-a223-39c4544f1470 does not exist
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:47:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:47:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:47:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:47:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:47:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.16598668 +0000 UTC m=+0.084216256 container create c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.13738834 +0000 UTC m=+0.055617956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:29 compute-0 systemd[1]: Started libpod-conmon-c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f.scope.
Nov 26 01:47:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.308148647 +0000 UTC m=+0.226378263 container init c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.322504593 +0000 UTC m=+0.240734139 container start c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.327621108 +0000 UTC m=+0.245850734 container attach c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:47:29 compute-0 festive_mendel[409507]: 167 167
Nov 26 01:47:29 compute-0 systemd[1]: libpod-c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f.scope: Deactivated successfully.
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.335685206 +0000 UTC m=+0.253914782 container died c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-242a9ae714ce803e17080dc60f90078262d60d55cf6e2d0211d32097c519b6cf-merged.mount: Deactivated successfully.
Nov 26 01:47:29 compute-0 podman[409491]: 2025-11-26 01:47:29.417252257 +0000 UTC m=+0.335481833 container remove c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:47:29 compute-0 systemd[1]: libpod-conmon-c1134241729d93c840cfc9b31bf886cecde7ad68fb2d39a0d5f31ee072e9759f.scope: Deactivated successfully.
Nov 26 01:47:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:29 compute-0 podman[409530]: 2025-11-26 01:47:29.682048806 +0000 UTC m=+0.078684389 container create 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:47:29 compute-0 podman[409530]: 2025-11-26 01:47:29.650545134 +0000 UTC m=+0.047180757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:29 compute-0 podman[158021]: time="2025-11-26T01:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:47:29 compute-0 systemd[1]: Started libpod-conmon-306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc.scope.
Nov 26 01:47:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:29 compute-0 podman[409530]: 2025-11-26 01:47:29.844092826 +0000 UTC m=+0.240728469 container init 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:47:29 compute-0 nova_compute[350387]: 2025-11-26 01:47:29.851 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:47:29 compute-0 podman[409530]: 2025-11-26 01:47:29.891495648 +0000 UTC m=+0.288131251 container start 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:47:29 compute-0 podman[409530]: 2025-11-26 01:47:29.898468186 +0000 UTC m=+0.295103799 container attach 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:47:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44290 "" "Go-http-client/1.1"
Nov 26 01:47:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8534 "" "Go-http-client/1.1"
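The two GET lines above are a Go HTTP client querying the podman system service's libpod REST API over its unix socket. A minimal sketch of the same containers/json query from Python, assuming the conventional root socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection over a unix socket; the host value is a placeholder.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")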
Nov 26 01:47:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:31 compute-0 beautiful_visvesvaraya[409546]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:47:31 compute-0 beautiful_visvesvaraya[409546]: --> relative data size: 1.0
Nov 26 01:47:31 compute-0 beautiful_visvesvaraya[409546]: --> All data devices are unavailable
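The three lines above are cephadm's periodic 'ceph-volume lvm batch --report' pass over its drive-group spec: three LVM data devices were offered and all are already consumed by OSDs, hence "All data devices are unavailable". A sketch for reproducing the report by hand, assuming cephadm is installed on the host; the device argument is illustrative:

    import subprocess

    # 'cephadm ceph-volume' runs ceph-volume inside the ceph container,
    # just like the one-shot containers throughout this log.
    report = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "batch",
         "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0"],  # illustrative device
        capture_output=True, text=True, check=False)
    print(report.stdout or report.stderr)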
Nov 26 01:47:31 compute-0 systemd[1]: libpod-306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc.scope: Deactivated successfully.
Nov 26 01:47:31 compute-0 podman[409530]: 2025-11-26 01:47:31.228674031 +0000 UTC m=+1.625309654 container died 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:31 compute-0 systemd[1]: libpod-306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc.scope: Consumed 1.223s CPU time.
Nov 26 01:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4131d23792b44702f69165f68a6cbc9b5330229de0b1c5497e80920c69e4e16f-merged.mount: Deactivated successfully.
Nov 26 01:47:31 compute-0 podman[409530]: 2025-11-26 01:47:31.341505487 +0000 UTC m=+1.738141070 container remove 306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:47:31 compute-0 systemd[1]: libpod-conmon-306d7fcb8dd0457f151ef05c58ae97599792fbac724b8825ae7a9aee6deefecc.scope: Deactivated successfully.
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: ERROR   01:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:47:31 compute-0 openstack_network_exporter[367323]: 
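The exporter errors above mean no daemon control sockets were found where it looked. ovs-appctl-style clients locate a daemon through its <name>.<pid>.ctl socket; a minimal check of the conventional run directories (the same paths the exporter container mounts, per its config_data later in the log):

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control sockets found")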
Nov 26 01:47:31 compute-0 podman[409576]: 2025-11-26 01:47:31.45600315 +0000 UTC m=+0.176482830 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.467740884 +0000 UTC m=+0.088690423 container create 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.428639796 +0000 UTC m=+0.049589385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:32 compute-0 systemd[1]: Started libpod-conmon-156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c.scope.
Nov 26 01:47:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.596745148 +0000 UTC m=+0.217694667 container init 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.610898589 +0000 UTC m=+0.231848108 container start 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.615974952 +0000 UTC m=+0.236924531 container attach 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:47:32 compute-0 focused_brown[409754]: 167 167
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.622008603 +0000 UTC m=+0.242958132 container died 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:47:32 compute-0 systemd[1]: libpod-156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c.scope: Deactivated successfully.
Nov 26 01:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-64551e51557af54a5ad2d33104eaef4b10dbf00318c6ff1b0b307aa1f5b02e9a-merged.mount: Deactivated successfully.
Nov 26 01:47:32 compute-0 podman[409739]: 2025-11-26 01:47:32.689033381 +0000 UTC m=+0.309982890 container remove 156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brown, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:47:32 compute-0 systemd[1]: libpod-conmon-156d917c5262d6b3919ae87da0a5cb4efe5690fc9cf1bd4558968439e13abe1c.scope: Deactivated successfully.
Nov 26 01:47:32 compute-0 podman[409760]: 2025-11-26 01:47:32.776634573 +0000 UTC m=+0.116692076 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 01:47:32 compute-0 podman[409798]: 2025-11-26 01:47:32.906001467 +0000 UTC m=+0.074447690 container create 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:47:32 compute-0 podman[409798]: 2025-11-26 01:47:32.877011736 +0000 UTC m=+0.045458029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:32 compute-0 systemd[1]: Started libpod-conmon-5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50.scope.
Nov 26 01:47:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601a286c2b8845d51e1e039bfcca428ad96c7d4132440cb6f604641e05c65595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601a286c2b8845d51e1e039bfcca428ad96c7d4132440cb6f604641e05c65595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601a286c2b8845d51e1e039bfcca428ad96c7d4132440cb6f604641e05c65595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601a286c2b8845d51e1e039bfcca428ad96c7d4132440cb6f604641e05c65595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:33 compute-0 podman[409798]: 2025-11-26 01:47:33.063972331 +0000 UTC m=+0.232418584 container init 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:47:33 compute-0 podman[409798]: 2025-11-26 01:47:33.096164422 +0000 UTC m=+0.264610645 container start 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:47:33 compute-0 podman[409798]: 2025-11-26 01:47:33.101208755 +0000 UTC m=+0.269654978 container attach 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:33 compute-0 kind_feynman[409815]: {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    "0": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "devices": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "/dev/loop3"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            ],
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_name": "ceph_lv0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_size": "21470642176",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "name": "ceph_lv0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "tags": {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_name": "ceph",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.crush_device_class": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.encrypted": "0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_id": "0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.vdo": "0"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            },
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "vg_name": "ceph_vg0"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        }
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    ],
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    "1": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "devices": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "/dev/loop4"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            ],
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_name": "ceph_lv1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_size": "21470642176",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "name": "ceph_lv1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "tags": {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_name": "ceph",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.crush_device_class": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.encrypted": "0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_id": "1",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.vdo": "0"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            },
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "vg_name": "ceph_vg1"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        }
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    ],
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    "2": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "devices": [
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "/dev/loop5"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            ],
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_name": "ceph_lv2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_size": "21470642176",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "name": "ceph_lv2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "tags": {
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.cluster_name": "ceph",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.crush_device_class": "",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.encrypted": "0",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osd_id": "2",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:                "ceph.vdo": "0"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            },
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "type": "block",
Nov 26 01:47:33 compute-0 kind_feynman[409815]:            "vg_name": "ceph_vg2"
Nov 26 01:47:33 compute-0 kind_feynman[409815]:        }
Nov 26 01:47:33 compute-0 kind_feynman[409815]:    ]
Nov 26 01:47:33 compute-0 kind_feynman[409815]: }
Nov 26 01:47:33 compute-0 systemd[1]: libpod-5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50.scope: Deactivated successfully.
Nov 26 01:47:33 compute-0 podman[409798]: 2025-11-26 01:47:33.935446413 +0000 UTC m=+1.103892676 container died 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-601a286c2b8845d51e1e039bfcca428ad96c7d4132440cb6f604641e05c65595-merged.mount: Deactivated successfully.
Nov 26 01:47:34 compute-0 podman[409798]: 2025-11-26 01:47:34.037706659 +0000 UTC m=+1.206152882 container remove 5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:34 compute-0 systemd[1]: libpod-conmon-5b0b1ad950ce8f93262311237f02747dcc6bd2f5e899dd3d35e9f440d9c9be50.scope: Deactivated successfully.
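The JSON that kind_feynman printed is keyed by OSD id and carries the LVM tags cephadm uses to rediscover OSDs on this host. By its fields it resembles `ceph-volume lvm list --format json` output, though the log never names the command, so treat that as an assumption. A short sketch over a hand-trimmed copy of the listing, totalling the three lv_size values; the result, 64411926528 bytes (~60 GiB), is the same capacity figure the pgmap and pg_autoscaler lines below report:

    import json

    # Hand-trimmed copy of the JSON block above (only the fields used here).
    listing = json.loads("""
    {
      "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "lv_size": "21470642176"}],
      "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "lv_size": "21470642176"}],
      "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "lv_size": "21470642176"}]
    }
    """)

    total = sum(int(lv["lv_size"]) for lvs in listing.values() for lv in lvs)
    print(total, round(total / 2**30, 1))  # 64411926528 bytes, ~60.0 GiB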
Nov 26 01:47:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 26 01:47:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:34 compute-0 podman[409972]: 2025-11-26 01:47:34.948171636 +0000 UTC m=+0.072446893 container create 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:34.923181209 +0000 UTC m=+0.047456466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:35 compute-0 systemd[1]: Started libpod-conmon-79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0.scope.
Nov 26 01:47:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:35.101250651 +0000 UTC m=+0.225525938 container init 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:35.121203036 +0000 UTC m=+0.245478293 container start 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:35.128264626 +0000 UTC m=+0.252539923 container attach 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:47:35 compute-0 inspiring_bell[409986]: 167 167
Nov 26 01:47:35 compute-0 systemd[1]: libpod-79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0.scope: Deactivated successfully.
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:35.134261266 +0000 UTC m=+0.258536523 container died 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcd13e8a9226801ed09754ba616e74c846d797cf01481c1fb1be93b47e1647d8-merged.mount: Deactivated successfully.
Nov 26 01:47:35 compute-0 podman[409972]: 2025-11-26 01:47:35.222996239 +0000 UTC m=+0.347271486 container remove 79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:47:35 compute-0 systemd[1]: libpod-conmon-79e9734fd693cba160391155bb7111b17be7a46afd58742efe9dc131c5fa53c0.scope: Deactivated successfully.
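The short-lived inspiring_bell container appears to exist only to print the one-line payload "167 167" seen above: 167 is the fixed ceph user and group id in RHEL-family Ceph images, and cephadm starts a throwaway container like this to learn the uid/gid it should chown host files to. That reading is an inference from the output; the log does not show what was probed. A sketch of the same lookup, assuming it runs inside a Ceph container where the 'ceph' account exists:

    import grp
    import pwd

    # Assumption: executed inside a quay.io/ceph/ceph container, where the
    # 'ceph' account is created with uid/gid 167 at image build time.
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)  # 167 167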
Nov 26 01:47:35 compute-0 podman[410009]: 2025-11-26 01:47:35.510935494 +0000 UTC m=+0.090985098 container create b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:47:35 compute-0 podman[410009]: 2025-11-26 01:47:35.478704821 +0000 UTC m=+0.058754435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:47:35 compute-0 systemd[1]: Started libpod-conmon-b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85.scope.
Nov 26 01:47:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28116746a0b824235ab1d31e182d42986c014272d29b28192a7cd062ba6eba9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28116746a0b824235ab1d31e182d42986c014272d29b28192a7cd062ba6eba9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28116746a0b824235ab1d31e182d42986c014272d29b28192a7cd062ba6eba9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28116746a0b824235ab1d31e182d42986c014272d29b28192a7cd062ba6eba9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:47:35 compute-0 podman[410009]: 2025-11-26 01:47:35.695683877 +0000 UTC m=+0.275733491 container init b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 26 01:47:35 compute-0 podman[410009]: 2025-11-26 01:47:35.715886549 +0000 UTC m=+0.295936173 container start b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:47:35 compute-0 podman[410009]: 2025-11-26 01:47:35.723224947 +0000 UTC m=+0.303274601 container attach b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 01:47:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 26 01:47:36 compute-0 epic_elion[410025]: {
Nov 26 01:47:36 compute-0 epic_elion[410025]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_id": 0,
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "type": "bluestore"
Nov 26 01:47:36 compute-0 epic_elion[410025]:    },
Nov 26 01:47:36 compute-0 epic_elion[410025]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_id": 2,
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "type": "bluestore"
Nov 26 01:47:36 compute-0 epic_elion[410025]:    },
Nov 26 01:47:36 compute-0 epic_elion[410025]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_id": 1,
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:47:36 compute-0 epic_elion[410025]:        "type": "bluestore"
Nov 26 01:47:36 compute-0 epic_elion[410025]:    }
Nov 26 01:47:36 compute-0 epic_elion[410025]: }
Nov 26 01:47:36 compute-0 systemd[1]: libpod-b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85.scope: Deactivated successfully.
Nov 26 01:47:36 compute-0 podman[410009]: 2025-11-26 01:47:36.860422455 +0000 UTC m=+1.440472029 container died b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:47:36 compute-0 systemd[1]: libpod-b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85.scope: Consumed 1.148s CPU time.
Nov 26 01:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-28116746a0b824235ab1d31e182d42986c014272d29b28192a7cd062ba6eba9c-merged.mount: Deactivated successfully.
Nov 26 01:47:36 compute-0 podman[410009]: 2025-11-26 01:47:36.970489712 +0000 UTC m=+1.550539316 container remove b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:47:36 compute-0 systemd[1]: libpod-conmon-b4e1d1c33221e7a8a849cf2879f2fce3ddbc8e92f3118dcc46b1bfc99518ea85.scope: Deactivated successfully.
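epic_elion printed a second inventory, this one keyed by OSD uuid and resolving to the activated device-mapper paths with type "bluestore". The osd_uuid/osd_id pairs agree exactly with the ceph.osd_fsid tags in the LVM listing above, which is presumably how cephadm correlates BlueStore labels with logical volumes. A small consistency check over hand-copied values from the two blocks:

    # osd_id -> ceph.osd_fsid, copied from the kind_feynman LVM listing.
    lvm_tags = {
        0: "835781ef-644a-4834-abb3-029e5bcba0ff",
        1: "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
        2: "8f697525-afad-4f38-820d-80587338cf3b",
    }

    # osd_uuid -> osd_id, copied from the epic_elion BlueStore listing.
    raw_list = {
        "835781ef-644a-4834-abb3-029e5bcba0ff": 0,
        "a345f9b0-19f1-464f-95c4-9c68bb202f1e": 1,
        "8f697525-afad-4f38-820d-80587338cf3b": 2,
    }

    assert all(raw_list[fsid] == osd_id for osd_id, fsid in lvm_tags.items())
    print("LVM tags and BlueStore labels agree for all three OSDs")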
Nov 26 01:47:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:47:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:47:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:37 compute-0 podman[410059]: 2025-11-26 01:47:37.042742138 +0000 UTC m=+0.137182956 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:47:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 71f0c9df-c003-48c4-b366-b1a900dd56a2 does not exist
Nov 26 01:47:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8c7e8170-8334-4d90-9f17-36c1d01ada68 does not exist
Nov 26 01:47:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:47:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:47:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.071944) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661072001, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2055, "num_deletes": 251, "total_data_size": 3494934, "memory_usage": 3547424, "flush_reason": "Manual Compaction"}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661096013, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3396632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20880, "largest_seqno": 22934, "table_properties": {"data_size": 3387336, "index_size": 5854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18600, "raw_average_key_size": 19, "raw_value_size": 3368805, "raw_average_value_size": 3614, "num_data_blocks": 266, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764121437, "oldest_key_time": 1764121437, "file_creation_time": 1764121661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 24152 microseconds, and 16575 cpu microseconds.
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.096096) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3396632 bytes OK
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.096122) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.099149) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.099169) EVENT_LOG_v1 {"time_micros": 1764121661099162, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.099191) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3486322, prev total WAL file size 3486322, number of live WAL files 2.
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:47:41
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.101090) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3317KB)], [50(7303KB)]
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661101133, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10874948, "oldest_snapshot_seqno": -1}
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'vms', 'volumes']
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4683 keys, 9145643 bytes, temperature: kUnknown
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661164266, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9145643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9111918, "index_size": 20943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 114633, "raw_average_key_size": 24, "raw_value_size": 9024730, "raw_average_value_size": 1927, "num_data_blocks": 884, "num_entries": 4683, "num_filter_entries": 4683, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764121661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.164581) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9145643 bytes
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.166770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.0 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5197, records dropped: 514 output_compression: NoCompression
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.166799) EVENT_LOG_v1 {"time_micros": 1764121661166785, "job": 26, "event": "compaction_finished", "compaction_time_micros": 63241, "compaction_time_cpu_micros": 39426, "output_level": 6, "num_output_files": 1, "total_output_size": 9145643, "num_input_records": 5197, "num_output_records": 4683, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661168223, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121661171021, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.100929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.171178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.171188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.171192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.171197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:47:41 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:47:41.171201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
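The [JOB 26] summary's amplification and throughput figures follow directly from the byte counts logged around it: one 3396632-byte L0 table (#52) plus one L6 table (#50) make up the 10874948-byte input, and the single output table (#53) is 9145643 bytes, written in 63241 microseconds. A quick arithmetic check reproducing the numbers in the summary line:

    l0_in = 3_396_632      # table #52, the freshly flushed L0 input
    total_in = 10_874_948  # input_data_size from the compaction_started event
    out = 9_145_643        # table #53 written to L6
    micros = 63_241        # compaction_time_micros

    print(round(out / l0_in, 1))               # 2.7   write-amplify
    print(round((total_in + out) / l0_in, 1))  # 5.9   read-write-amplify
    print(round(total_in / micros, 1))         # 172.0 MB/sec rd (bytes/us)
    print(round(out / micros, 1))              # 144.6 MB/sec wr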
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:47:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:47:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:47:44 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Nov 26 01:47:44 compute-0 systemd[1]: session-60.scope: Consumed 12.404s CPU time.
Nov 26 01:47:44 compute-0 systemd-logind[800]: Session 60 logged out. Waiting for processes to exit.
Nov 26 01:47:44 compute-0 systemd-logind[800]: Removed session 60.
Nov 26 01:47:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:47:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 01:47:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 01:47:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:50 compute-0 podman[410149]: 2025-11-26 01:47:50.585384696 +0000 UTC m=+0.123842118 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:47:50 compute-0 podman[410147]: 2025-11-26 01:47:50.588890556 +0000 UTC m=+0.137206158 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 01:47:50 compute-0 podman[410148]: 2025-11-26 01:47:50.593329671 +0000 UTC m=+0.137312710 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:47:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
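Each "pg target" above is the pool's capacity ratio times its bias times a constant 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; the multiplier is an inference from the numbers, not something the log states. The target is then quantized to a power of two before comparison with the current pg_num (hence ".mgr" lands on 1 and "cephfs.cephfs.meta" on 16). Reproducing the non-zero targets:

    # (capacity ratio, bias) pairs copied from the pg_autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    TARGET = 3 * 100  # assumed: 3 OSDs x mon_target_pg_per_osd (default 100)
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * TARGET}")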
Nov 26 01:47:52 compute-0 podman[410205]: 2025-11-26 01:47:52.583812327 +0000 UTC m=+0.131488535 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 01:47:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:52 compute-0 podman[410206]: 2025-11-26 01:47:52.632804885 +0000 UTC m=+0.175237144 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 26 01:47:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:55 compute-0 podman[410247]: 2025-11-26 01:47:55.581748665 +0000 UTC m=+0.125594209 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm, container_name=kepler)
Nov 26 01:47:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:47:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:47:59 compute-0 podman[158021]: time="2025-11-26T01:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:47:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:47:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8105 "" "Go-http-client/1.1"
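These two GET requests are a client (the podman_exporter seen later in this log) polling podman's libpod REST API over its unix socket. A minimal sketch of the same list-containers call; the socket path comes from the exporter's CONTAINER_HOST setting and the API version from the request line above:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over a unix-domain socket, enough for the libpod API."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])  # one entry per container on the host
```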
Nov 26 01:48:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:01 compute-0 openstack_network_exporter[367323]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:48:01 compute-0 openstack_network_exporter[367323]: ERROR   01:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:48:01 compute-0 openstack_network_exporter[367323]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:48:01 compute-0 openstack_network_exporter[367323]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:48:01 compute-0 openstack_network_exporter[367323]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
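These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server run on the control plane, so their control sockets never appear locally, and there is no userspace (dpif-netdev) datapath to query. A hedged sketch of the lookup that is failing; the helper name and the run directory are illustrative, not the exporter's actual code:

```python
import glob
import os

def find_ctl_socket(rundir, daemon):
    """ovs/ovn daemons expose a control socket named <daemon>.<pid>.ctl."""
    hits = glob.glob(os.path.join(rundir, daemon + ".*.ctl"))
    return hits[0] if hits else None

if find_ctl_socket("/run/ovn", "ovn-northd") is None:
    # Same outcome as the exporter above: log the error and skip the metric.
    print("no control socket files found for ovn-northd")
```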
Nov 26 01:48:02 compute-0 podman[410268]: 2025-11-26 01:48:02.552230585 +0000 UTC m=+0.099492609 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 26 01:48:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:03 compute-0 podman[410287]: 2025-11-26 01:48:03.575734874 +0000 UTC m=+0.128121300 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, version=9.6, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 01:48:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:07 compute-0 podman[410308]: 2025-11-26 01:48:07.534983039 +0000 UTC m=+0.097223145 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
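The node_exporter flags above disable most collectors and restrict the systemd collector to a unit whitelist. A quick check of what that --collector.systemd.unit-include regex admits (node_exporter anchors the expression, which fullmatch approximates; the unit names below are examples, not taken from this host):

```python
import re

pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
             "virtqemud.service", "sshd.service"]:
    print(unit, bool(pattern.fullmatch(unit)))
# sshd.service is excluded; the edpm_*, ovs* and virt* units are collected.
```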
Nov 26 01:48:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:11 compute-0 nova_compute[350387]: 2025-11-26 01:48:11.321 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:12 compute-0 nova_compute[350387]: 2025-11-26 01:48:12.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:13 compute-0 nova_compute[350387]: 2025-11-26 01:48:13.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:13 compute-0 nova_compute[350387]: 2025-11-26 01:48:13.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:13 compute-0 nova_compute[350387]: 2025-11-26 01:48:13.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:14 compute-0 nova_compute[350387]: 2025-11-26 01:48:14.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:14 compute-0 nova_compute[350387]: 2025-11-26 01:48:14.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:48:14 compute-0 nova_compute[350387]: 2025-11-26 01:48:14.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:48:14 compute-0 nova_compute[350387]: 2025-11-26 01:48:14.317 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 01:48:14 compute-0 nova_compute[350387]: 2025-11-26 01:48:14.318 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:15 compute-0 nova_compute[350387]: 2025-11-26 01:48:15.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:15 compute-0 nova_compute[350387]: 2025-11-26 01:48:15.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
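_reclaim_queued_deletes exits immediately here because soft delete is disabled: with reclaim_instance_interval at its default of 0, deletes are immediate and nothing is ever queued for reclaim. A one-function sketch of that gate (illustrative, not nova's literal code):

```python
def should_reclaim(reclaim_instance_interval=0):
    """Mirror of the gate behind the log line above (illustrative only)."""
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return False
    return True

should_reclaim()  # default 0: soft-deleted instances are never queued
```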
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.342 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.342 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:48:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:48:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717382531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:48:16 compute-0 nova_compute[350387]: 2025-11-26 01:48:16.829 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
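The resource tracker sizes its RBD-backed storage by shelling out to the ceph CLI, as the Running cmd/CMD returned pair shows. A minimal reproduction of that call and the top-level fields it reads (field names per `ceph df --format=json`; assumes the same client.openstack keyring is readable):

```python
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
print("avail: %.1f GiB of %.1f GiB" % (
    stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))
# Against the pgmap above this prints roughly "avail: 59.9 GiB of 60.0 GiB".
```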
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.454 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.456 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.457 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.458 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.542 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.543 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:48:17 compute-0 nova_compute[350387]: 2025-11-26 01:48:17.569 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:48:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:48:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907957025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:48:18 compute-0 nova_compute[350387]: 2025-11-26 01:48:18.005 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:48:18 compute-0 nova_compute[350387]: 2025-11-26 01:48:18.017 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:48:18 compute-0 nova_compute[350387]: 2025-11-26 01:48:18.040 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
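That inventory dict maps directly onto schedulable capacity via placement's formula, capacity = (total - reserved) * allocation_ratio. Plugging in the exact numbers from the line above:

```python
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, round(capacity, 2))
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1 -- CPU is overcommitted 4x,
# while disk is undercommitted to keep 10% headroom.
```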
Nov 26 01:48:18 compute-0 nova_compute[350387]: 2025-11-26 01:48:18.043 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:48:18 compute-0 nova_compute[350387]: 2025-11-26 01:48:18.043 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
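The Acquiring/acquired/released triplets around this audit come from oslo.concurrency's lock helpers, which log the wait and hold times seen here. A minimal usage sketch (requires the oslo.concurrency package; the function body is a placeholder):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Runs with the in-process "compute_resources" lock held; entering and
    # leaving produce the waited/held debug lines seen in this log.
    pass

update_available_resource()
```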
Nov 26 01:48:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:21 compute-0 podman[410374]: 2025-11-26 01:48:21.574926243 +0000 UTC m=+0.118329211 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:48:21 compute-0 podman[410375]: 2025-11-26 01:48:21.582984391 +0000 UTC m=+0.117938590 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:21 compute-0 podman[410376]: 2025-11-26 01:48:21.60235072 +0000 UTC m=+0.132071711 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:48:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:23 compute-0 podman[410433]: 2025-11-26 01:48:23.595532482 +0000 UTC m=+0.131784104 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:48:23 compute-0 podman[410434]: 2025-11-26 01:48:23.659595856 +0000 UTC m=+0.190485626 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:48:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:24.963 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:48:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:24.963 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:48:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:24.963 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:48:26 compute-0 podman[410479]: 2025-11-26 01:48:26.609472334 +0000 UTC m=+0.156657968 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, name=ubi9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 01:48:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:48:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970046531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:48:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:48:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3970046531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
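Both dispatches above come from a remote client.openstack consumer (note the 192.168.122.10 source) checking capacity and quota on the 'volumes' pool. The same mon commands can be issued from Python with the rados binding; a sketch assuming the client.openstack keyring referenced in this log is readable:

```python
import json

import rados  # python3-rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
cmd = json.dumps({"prefix": "osd pool get-quota",
                  "pool": "volumes", "format": "json"})
ret, out, errs = cluster.mon_command(cmd, b"")
print(ret, json.loads(out))  # quota_max_bytes / quota_max_objects for 'volumes'
cluster.shutdown()
```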
Nov 26 01:48:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:29 compute-0 podman[158021]: time="2025-11-26T01:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:48:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:48:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8103 "" "Go-http-client/1.1"
Nov 26 01:48:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:31 compute-0 openstack_network_exporter[367323]: ERROR   01:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:48:31 compute-0 openstack_network_exporter[367323]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:48:31 compute-0 openstack_network_exporter[367323]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:48:31 compute-0 openstack_network_exporter[367323]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:48:31 compute-0 openstack_network_exporter[367323]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:48:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:33 compute-0 podman[410498]: 2025-11-26 01:48:33.565048632 +0000 UTC m=+0.109860442 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 26 01:48:34 compute-0 podman[410518]: 2025-11-26 01:48:34.59205293 +0000 UTC m=+0.140801529 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, io.buildah.version=1.33.7)
Nov 26 01:48:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:37 compute-0 podman[410587]: 2025-11-26 01:48:37.762862924 +0000 UTC m=+0.111884540 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1c5bd8d8-b8bf-4474-b41b-0f2daa6687e9 does not exist
Nov 26 01:48:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 477b02f8-2e75-4258-be24-9042a523c604 does not exist
Nov 26 01:48:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8d5f8b39-0bb5-4894-91ae-e58ca86e4182 does not exist
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:48:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:48:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
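Each audit line above is the mon-side record of a JSON mon_command dispatched by the cephadm mgr module. The same payloads can be sent from Python through the rados binding; a sketch, assuming a reachable cluster, the usual /etc/ceph/ceph.conf and an admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command bodies as in the audit lines above.
        for cmd in (
            {"prefix": "config rm", "who": "osd/host:compute-0",
             "name": "osd_memory_target"},
            {"prefix": "config generate-minimal-conf"},
        ):
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, outs or outbuf.decode())
    finally:
        cluster.shutdown()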
Nov 26 01:48:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.773741467 +0000 UTC m=+0.091999167 container create 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.739349513 +0000 UTC m=+0.057607233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:39 compute-0 systemd[1]: Started libpod-conmon-2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db.scope.
Nov 26 01:48:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.924666311 +0000 UTC m=+0.242924081 container init 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.944998037 +0000 UTC m=+0.263255747 container start 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.952070137 +0000 UTC m=+0.270327907 container attach 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:48:39 compute-0 crazy_shockley[410846]: 167 167
Nov 26 01:48:39 compute-0 systemd[1]: libpod-2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db.scope: Deactivated successfully.
Nov 26 01:48:39 compute-0 podman[410830]: 2025-11-26 01:48:39.95817513 +0000 UTC m=+0.276432830 container died 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a42ec70fda63e5e2a0f4aed11bc304098d3ce5726a9d10742eb45763d1cfb8b5-merged.mount: Deactivated successfully.
Nov 26 01:48:40 compute-0 podman[410830]: 2025-11-26 01:48:40.037037694 +0000 UTC m=+0.355295404 container remove 2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:40 compute-0 systemd[1]: libpod-conmon-2eaaf85487952b7eb51c689406f94020907894fd0d936e7264b879292c8832db.scope: Deactivated successfully.
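The crazy_shockley container above lives for well under a second: created at 39.77, dead by 39.96, removed at 40.04. That create/init/start/attach/died/remove chain is typical of the throwaway containers cephadm spawns to run a single command (here it printed "167 167", likely the ceph uid/gid probe). A sketch for grouping such lifecycles out of a saved journal excerpt; the file name and regex are assumptions:

    import re
    from collections import defaultdict

    # Group podman events by 64-hex container ID to spot short-lived
    # helper containers like crazy_shockley.
    EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64})")

    events = defaultdict(list)
    with open("journal-excerpt.log") as fh:  # assumed file name
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                events[m.group(2)].append(m.group(1))

    for cid, evs in events.items():
        print(cid[:12], "->", " ".join(evs))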
Nov 26 01:48:40 compute-0 podman[410869]: 2025-11-26 01:48:40.286356005 +0000 UTC m=+0.076957030 container create 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:48:40 compute-0 podman[410869]: 2025-11-26 01:48:40.254174154 +0000 UTC m=+0.044775159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:40 compute-0 systemd[1]: Started libpod-conmon-04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f.scope.
Nov 26 01:48:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
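The five xfs remount warnings above all quote the same ceiling, 0x7fffffff, i.e. the largest signed 32-bit count of seconds since the Unix epoch. A one-liner confirms that this lands on 19 January 2038:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch, as printed by the kernel above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00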
Nov 26 01:48:40 compute-0 podman[410869]: 2025-11-26 01:48:40.443740673 +0000 UTC m=+0.234341748 container init 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:40 compute-0 podman[410869]: 2025-11-26 01:48:40.459267333 +0000 UTC m=+0.249868318 container start 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:48:40 compute-0 podman[410869]: 2025-11-26 01:48:40.464010917 +0000 UTC m=+0.254611982 container attach 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:48:41
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms']
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:48:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:48:41 compute-0 naughty_kepler[410886]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:48:41 compute-0 naughty_kepler[410886]: --> relative data size: 1.0
Nov 26 01:48:41 compute-0 naughty_kepler[410886]: --> All data devices are unavailable
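The naughty_kepler output above is ceph-volume deciding, inside another short-lived cephadm container, that none of the three LVM data devices it was passed can take a new OSD. To see the same availability verdict (and the rejection reasons) by hand, something like the following should work, assuming ceph-volume is available on the host or via cephadm shell:

    import json
    import subprocess

    # ceph-volume's inventory reports, per device, whether it is usable
    # for a new OSD and why it was rejected if not.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"],
              "; ".join(dev.get("rejected_reasons", [])))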
Nov 26 01:48:41 compute-0 systemd[1]: libpod-04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f.scope: Deactivated successfully.
Nov 26 01:48:41 compute-0 systemd[1]: libpod-04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f.scope: Consumed 1.238s CPU time.
Nov 26 01:48:41 compute-0 podman[410869]: 2025-11-26 01:48:41.747580951 +0000 UTC m=+1.538181976 container died 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 26 01:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6249bd3519850a4036d9abe67c2ba307186bbe94bf03eb76328c9fb0b0d3cc2-merged.mount: Deactivated successfully.
Nov 26 01:48:41 compute-0 podman[410869]: 2025-11-26 01:48:41.856688212 +0000 UTC m=+1.647289237 container remove 04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:48:41 compute-0 systemd[1]: libpod-conmon-04cdda41d1c04a114733c22f8ca5fa326bb3ab4cdaefd0bd206febb51e70dc6f.scope: Deactivated successfully.
Nov 26 01:48:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.858 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.859 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.860 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.863 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.866 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.868 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.868 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.869 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.870 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.871 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.872 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.873 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.874 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.875 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.876 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.878 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:48:42.879 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.22764887 +0000 UTC m=+0.120974617 container create 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.161560468 +0000 UTC m=+0.054886255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:43 compute-0 systemd[1]: Started libpod-conmon-2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44.scope.
Nov 26 01:48:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.370135556 +0000 UTC m=+0.263461363 container init 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.38687648 +0000 UTC m=+0.280202227 container start 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.397244043 +0000 UTC m=+0.290569940 container attach 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 26 01:48:43 compute-0 angry_chaplygin[411080]: 167 167
Nov 26 01:48:43 compute-0 systemd[1]: libpod-2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44.scope: Deactivated successfully.
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.402939775 +0000 UTC m=+0.296265502 container died 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b13ec998ae6956b16b23c068c7a480aeff5a6bf2e0e6cbeed41a87966a00743c-merged.mount: Deactivated successfully.
Nov 26 01:48:43 compute-0 podman[411064]: 2025-11-26 01:48:43.499744606 +0000 UTC m=+0.393070353 container remove 2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:48:43 compute-0 systemd[1]: libpod-conmon-2f29f9133409221c82f8411d8ead0db1262bb493763c95d868047c6437e02b44.scope: Deactivated successfully.
Nov 26 01:48:43 compute-0 podman[411103]: 2025-11-26 01:48:43.80105257 +0000 UTC m=+0.097138872 container create 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:48:43 compute-0 podman[411103]: 2025-11-26 01:48:43.771300828 +0000 UTC m=+0.067387220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:43 compute-0 systemd[1]: Started libpod-conmon-3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0.scope.
Nov 26 01:48:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee764efa30499e6c412ccd556192fa0c5ac7e6d9f4d4e4ee46a1a0c4bac759/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee764efa30499e6c412ccd556192fa0c5ac7e6d9f4d4e4ee46a1a0c4bac759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee764efa30499e6c412ccd556192fa0c5ac7e6d9f4d4e4ee46a1a0c4bac759/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee764efa30499e6c412ccd556192fa0c5ac7e6d9f4d4e4ee46a1a0c4bac759/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:44 compute-0 podman[411103]: 2025-11-26 01:48:44.00651969 +0000 UTC m=+0.302606072 container init 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 01:48:44 compute-0 podman[411103]: 2025-11-26 01:48:44.02277429 +0000 UTC m=+0.318860622 container start 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:48:44 compute-0 podman[411103]: 2025-11-26 01:48:44.031328312 +0000 UTC m=+0.327414694 container attach 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:48:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]: {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    "0": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "devices": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "/dev/loop3"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            ],
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_name": "ceph_lv0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_size": "21470642176",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "name": "ceph_lv0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "tags": {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_name": "ceph",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.crush_device_class": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.encrypted": "0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_id": "0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.vdo": "0"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            },
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "vg_name": "ceph_vg0"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        }
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    ],
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    "1": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "devices": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "/dev/loop4"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            ],
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_name": "ceph_lv1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_size": "21470642176",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "name": "ceph_lv1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "tags": {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_name": "ceph",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.crush_device_class": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.encrypted": "0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_id": "1",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.vdo": "0"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            },
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "vg_name": "ceph_vg1"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        }
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    ],
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    "2": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "devices": [
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "/dev/loop5"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            ],
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_name": "ceph_lv2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_size": "21470642176",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "name": "ceph_lv2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "tags": {
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.cluster_name": "ceph",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.crush_device_class": "",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.encrypted": "0",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osd_id": "2",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:                "ceph.vdo": "0"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            },
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "type": "block",
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:            "vg_name": "ceph_vg2"
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:        }
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]:    ]
Nov 26 01:48:44 compute-0 quizzical_bartik[411120]: }
Nov 26 01:48:44 compute-0 systemd[1]: libpod-3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0.scope: Deactivated successfully.
Nov 26 01:48:44 compute-0 podman[411103]: 2025-11-26 01:48:44.867756162 +0000 UTC m=+1.163842494 container died 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:48:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ee764efa30499e6c412ccd556192fa0c5ac7e6d9f4d4e4ee46a1a0c4bac759-merged.mount: Deactivated successfully.
Nov 26 01:48:44 compute-0 podman[411103]: 2025-11-26 01:48:44.98066523 +0000 UTC m=+1.276751532 container remove 3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:48:44 compute-0 systemd[1]: libpod-conmon-3efcd4ce94eac51ed550522f87defb6d88b87fd98125b5d4b54e65c2ee0824f0.scope: Deactivated successfully.
Nov 26 01:48:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 26 01:48:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 26 01:48:45 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.218954481 +0000 UTC m=+0.088449946 container create 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.189582879 +0000 UTC m=+0.059078404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:46 compute-0 systemd[1]: Started libpod-conmon-18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e.scope.
Nov 26 01:48:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.363623889 +0000 UTC m=+0.233119434 container init 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.379604511 +0000 UTC m=+0.249099976 container start 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.384812299 +0000 UTC m=+0.254307854 container attach 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:48:46 compute-0 brave_ishizaka[411293]: 167 167
Nov 26 01:48:46 compute-0 systemd[1]: libpod-18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e.scope: Deactivated successfully.
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.390419968 +0000 UTC m=+0.259915463 container died 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 01:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-98ae6e1b1928324de9c5de68a07fa7faeff7940c05dfc5ffcf6d617e49c3e610-merged.mount: Deactivated successfully.
Nov 26 01:48:46 compute-0 podman[411277]: 2025-11-26 01:48:46.462356125 +0000 UTC m=+0.331851610 container remove 18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:48:46 compute-0 systemd[1]: libpod-conmon-18d2f554df6384b046d47b5bec1fc53e156235db5f03da74ee7acf646df2e57e.scope: Deactivated successfully.
Nov 26 01:48:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:46 compute-0 podman[411315]: 2025-11-26 01:48:46.745483693 +0000 UTC m=+0.094310403 container create 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 01:48:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 26 01:48:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 26 01:48:46 compute-0 podman[411315]: 2025-11-26 01:48:46.714745352 +0000 UTC m=+0.063572052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:48:46 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 26 01:48:46 compute-0 systemd[1]: Started libpod-conmon-98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189.scope.
Nov 26 01:48:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd88cd01d25145c405a68bd5630415ca7692ca4c33a06339cfc438910348fdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd88cd01d25145c405a68bd5630415ca7692ca4c33a06339cfc438910348fdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd88cd01d25145c405a68bd5630415ca7692ca4c33a06339cfc438910348fdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd88cd01d25145c405a68bd5630415ca7692ca4c33a06339cfc438910348fdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:48:46 compute-0 podman[411315]: 2025-11-26 01:48:46.929023991 +0000 UTC m=+0.277850761 container init 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:48:46 compute-0 podman[411315]: 2025-11-26 01:48:46.957665603 +0000 UTC m=+0.306492313 container start 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:48:46 compute-0 podman[411315]: 2025-11-26 01:48:46.96497378 +0000 UTC m=+0.313800480 container attach 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:48:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 26 01:48:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 26 01:48:47 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]: {
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_id": 0,
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "type": "bluestore"
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    },
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_id": 2,
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "type": "bluestore"
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    },
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_id": 1,
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:        "type": "bluestore"
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]:    }
Nov 26 01:48:48 compute-0 relaxed_mcnulty[411331]: }
Nov 26 01:48:48 compute-0 systemd[1]: libpod-98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189.scope: Deactivated successfully.
Nov 26 01:48:48 compute-0 systemd[1]: libpod-98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189.scope: Consumed 1.293s CPU time.
Nov 26 01:48:48 compute-0 podman[411315]: 2025-11-26 01:48:48.266295377 +0000 UTC m=+1.615122087 container died 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd88cd01d25145c405a68bd5630415ca7692ca4c33a06339cfc438910348fdd-merged.mount: Deactivated successfully.
Nov 26 01:48:48 compute-0 podman[411315]: 2025-11-26 01:48:48.372167445 +0000 UTC m=+1.720994155 container remove 98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcnulty, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:48:48 compute-0 systemd[1]: libpod-conmon-98653226194a679a0100c644f8f9c29a86312a6dababc3a4768ec987cdb7e189.scope: Deactivated successfully.
Nov 26 01:48:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:48:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:48:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 14cfddb7-dfd9-4a8b-8a4c-65a370d46983 does not exist
Nov 26 01:48:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bd35704a-0b7d-4a34-bf7e-bfbd34cf64b9 does not exist
Nov 26 01:48:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:48:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:48:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:48:50 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:48:52 compute-0 podman[411428]: 2025-11-26 01:48:52.594707336 +0000 UTC m=+0.138484224 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4)
Nov 26 01:48:52 compute-0 podman[411429]: 2025-11-26 01:48:52.597742852 +0000 UTC m=+0.143287190 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 01:48:52 compute-0 podman[411430]: 2025-11-26 01:48:52.603003021 +0000 UTC m=+0.134615374 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:48:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.3 MiB/s wr, 16 op/s
Nov 26 01:48:54 compute-0 podman[411487]: 2025-11-26 01:48:54.615490809 +0000 UTC m=+0.154785275 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 01:48:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Nov 26 01:48:54 compute-0 podman[411488]: 2025-11-26 01:48:54.652289651 +0000 UTC m=+0.188722146 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:48:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 26 01:48:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 26 01:48:54 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
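The pair of osdmap lines records epoch e124 with every OSD healthy ("3 total, 3 up, 3 in"). A small sketch that parses that payload out of journal text and flags a degraded map; the regex assumes only the exact wording shown above:

    import re

    payload = "osdmap e124: 3 total, 3 up, 3 in"  # copied from the entry above
    epoch, total, up, osd_in = map(
        int, re.search(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in", payload).groups())
    if up < total or osd_in < total:
        print(f"e{epoch}: degraded ({up}/{total} up, {osd_in}/{total} in)")
    else:
        print(f"e{epoch}: all {total} OSDs up and in")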
Nov 26 01:48:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 1.8 MiB/s wr, 12 op/s
Nov 26 01:48:57 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:57.243 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 01:48:57 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:57.244 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 01:48:57 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:48:57.248 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 01:48:57 compute-0 podman[411531]: 2025-11-26 01:48:57.570336877 +0000 UTC m=+0.118570759 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:48:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Nov 26 01:48:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:48:59 compute-0 podman[158021]: time="2025-11-26T01:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:48:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:48:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8111 "" "Go-http-client/1.1"
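These two GETs are the libpod REST API served on the podman socket (the same /run/podman/podman.sock the podman_exporter mounts). A sketch of the containers/json call using only the standard library; the socket path and API version are taken from the log, the rest is assumption:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a unix socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])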
Nov 26 01:49:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.6 MiB/s wr, 10 op/s
Nov 26 01:49:01 compute-0 openstack_network_exporter[367323]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:49:01 compute-0 openstack_network_exporter[367323]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:49:01 compute-0 openstack_network_exporter[367323]: ERROR   01:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:49:01 compute-0 openstack_network_exporter[367323]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:49:01 compute-0 openstack_network_exporter[367323]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
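The appctl errors above recur every 30 seconds (compare the 01:49:31 burst below): openstack_network_exporter probes ovn-northd and ovsdb-server control sockets that do not exist on a compute node, and the PMD queries fail because no userspace datapath is configured. A hedged check for which daemons actually expose a control socket, assuming the usual daemon.pid.ctl naming under the directories this exporter mounts:

    import glob

    # Host paths taken from the exporter's volume list (see the
    # openstack_network_exporter container entry below); the *.ctl naming
    # is the ovs-appctl convention and is an assumption here.
    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        print(d, "->", glob.glob(f"{d}/*.ctl") or "no control sockets")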
Nov 26 01:49:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:04 compute-0 podman[411551]: 2025-11-26 01:49:04.565761875 +0000 UTC m=+0.113801173 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:49:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.691036) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744691199, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1206, "num_deletes": 506, "total_data_size": 1292083, "memory_usage": 1322472, "flush_reason": "Manual Compaction"}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744702463, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 976394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22935, "largest_seqno": 24140, "table_properties": {"data_size": 971582, "index_size": 1824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14290, "raw_average_key_size": 18, "raw_value_size": 959484, "raw_average_value_size": 1267, "num_data_blocks": 82, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764121662, "oldest_key_time": 1764121662, "file_creation_time": 1764121744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11417 microseconds, and 6378 cpu microseconds.
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.702542) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 976394 bytes OK
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.702563) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.705259) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.705279) EVENT_LOG_v1 {"time_micros": 1764121744705273, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.705298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1285487, prev total WAL file size 1285487, number of live WAL files 2.
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.707043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(953KB)], [53(8931KB)]
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744707145, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10122037, "oldest_snapshot_seqno": -1}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4434 keys, 7042206 bytes, temperature: kUnknown
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744763983, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7042206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7012805, "index_size": 17195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111084, "raw_average_key_size": 25, "raw_value_size": 6932606, "raw_average_value_size": 1563, "num_data_blocks": 717, "num_entries": 4434, "num_filter_entries": 4434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764121744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.764281) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7042206 bytes
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.766881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.9 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(17.6) write-amplify(7.2) OK, records in: 5440, records dropped: 1006 output_compression: NoCompression
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.766909) EVENT_LOG_v1 {"time_micros": 1764121744766896, "job": 28, "event": "compaction_finished", "compaction_time_micros": 56912, "compaction_time_cpu_micros": 37763, "output_level": 6, "num_output_files": 1, "total_output_size": 7042206, "num_input_records": 5440, "num_output_records": 4434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.706345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.767245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.767250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.767253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.767256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:49:04.767259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744767659, "job": 0, "event": "table_file_deletion", "file_number": 55}
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:49:04 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121744771147, "job": 0, "event": "table_file_deletion", "file_number": 53}
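The ceph-mon burst above is one RocksDB manual compaction end to end: a memtable flush (JOB 27, L0 table #55, 976394 bytes), an L0-to-L6 compaction (JOB 28, table #56), then WAL and SST deletion. The amplification figures follow from the byte counts in the EVENT_LOG payloads: write-amplify 7042206 / 976394 ≈ 7.2 and read-write-amplify (10122037 + 7042206) / 976394 ≈ 17.6, matching the summary line. A sketch that pulls those JSON payloads back out of journal text:

    import json
    import re
    import sys

    # e.g. pipe journal output for ceph-mon into this script
    for line in sys.stdin:
        m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "compaction_finished":
            print("job", ev["job"], "output bytes", ev["total_output_size"],
                  "records", ev["num_input_records"], "->", ev["num_output_records"])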
Nov 26 01:49:05 compute-0 podman[411570]: 2025-11-26 01:49:05.574931858 +0000 UTC m=+0.125899767 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Nov 26 01:49:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:08 compute-0 podman[411590]: 2025-11-26 01:49:08.579793942 +0000 UTC m=+0.126821913 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
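node_exporter here is started with an explicit collector deny-list plus a systemd unit filter. The unit-include value (the '\\.' in config_data is an escaped literal dot) is an anchored Go regex inside node_exporter; re.fullmatch approximates that anchoring (an assumption) when checking which units would be collected:

    import re

    # Pattern copied from --collector.systemd.unit-include above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "sshd.service"):  # hypothetical unit names
        print(unit, "->", bool(unit_include.fullmatch(unit)))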
Nov 26 01:49:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:14 compute-0 nova_compute[350387]: 2025-11-26 01:49:14.046 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:14 compute-0 nova_compute[350387]: 2025-11-26 01:49:14.046 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:14 compute-0 nova_compute[350387]: 2025-11-26 01:49:14.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:14 compute-0 nova_compute[350387]: 2025-11-26 01:49:14.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:14 compute-0 nova_compute[350387]: 2025-11-26 01:49:14.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.325 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.325 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.326 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:15 compute-0 nova_compute[350387]: 2025-11-26 01:49:15.326 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:49:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:17 compute-0 nova_compute[350387]: 2025-11-26 01:49:17.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.374 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.375 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.375 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.375 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.376 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:49:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:49:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069330480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:49:18 compute-0 nova_compute[350387]: 2025-11-26 01:49:18.895 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
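The resource tracker sizes its RBD-backed disk pool by shelling out to the exact command logged above. Re-running it standalone is straightforward, assuming the client.openstack keyring referenced by --id is readable on the host; the stats keys below match current ceph df JSON output:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]  # verbatim from the log
    stats = json.loads(subprocess.check_output(cmd))["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])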
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.593 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.595 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.596 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.596 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.671 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.672 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:49:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:19 compute-0 nova_compute[350387]: 2025-11-26 01:49:19.702 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:49:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:49:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882277601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:49:20 compute-0 nova_compute[350387]: 2025-11-26 01:49:20.204 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:49:20 compute-0 nova_compute[350387]: 2025-11-26 01:49:20.217 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:49:20 compute-0 nova_compute[350387]: 2025-11-26 01:49:20.241 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:49:20 compute-0 nova_compute[350387]: 2025-11-26 01:49:20.244 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:49:20 compute-0 nova_compute[350387]: 2025-11-26 01:49:20.244 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
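The inventory dict logged at 01:49:20 is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio, i.e. 32 VCPU, 7167 MB of RAM and about 53 GB of disk for this node. The arithmetic, with the values copied from the report line:

    inventory = {  # from the nova.scheduler.client.report entry above
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, "schedulable:", (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])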
Nov 26 01:49:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:23 compute-0 podman[411660]: 2025-11-26 01:49:23.573666035 +0000 UTC m=+0.110064448 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:49:23 compute-0 podman[411659]: 2025-11-26 01:49:23.579014457 +0000 UTC m=+0.117987863 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 26 01:49:23 compute-0 podman[411658]: 2025-11-26 01:49:23.61021439 +0000 UTC m=+0.161136364 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 01:49:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:24.964 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:49:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:24.964 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:49:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:24.965 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:49:25 compute-0 podman[411718]: 2025-11-26 01:49:25.567797445 +0000 UTC m=+0.116479930 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:49:25 compute-0 podman[411719]: 2025-11-26 01:49:25.640478753 +0000 UTC m=+0.181408119 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 01:49:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:49:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2634730768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:49:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:49:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2634730768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
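Here a remote client.openstack (192.168.122.10, a different host from the local 192.168.122.100 seen earlier) asks the mon for df plus the quota on the volumes pool. The CLI equivalent of the second mon_command, with the pool name from the log and the same client id as the nova example above:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    quota = json.loads(out)
    print("max bytes:", quota.get("quota_max_bytes"),
          "max objects:", quota.get("quota_max_objects"))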
Nov 26 01:49:28 compute-0 podman[411762]: 2025-11-26 01:49:28.605418807 +0000 UTC m=+0.153287892 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vendor=Red Hat, Inc.)
Nov 26 01:49:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:29 compute-0 podman[158021]: time="2025-11-26T01:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:49:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:49:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8106 "" "Go-http-client/1.1"
Nov 26 01:49:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:31 compute-0 openstack_network_exporter[367323]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:49:31 compute-0 openstack_network_exporter[367323]: ERROR   01:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:49:31 compute-0 openstack_network_exporter[367323]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:49:31 compute-0 openstack_network_exporter[367323]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:49:31 compute-0 openstack_network_exporter[367323]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:49:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:35 compute-0 podman[411780]: 2025-11-26 01:49:35.355377863 +0000 UTC m=+0.150019010 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 01:49:36 compute-0 podman[411800]: 2025-11-26 01:49:36.583563428 +0000 UTC m=+0.131037143 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Nov 26 01:49:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:39 compute-0 podman[411822]: 2025-11-26 01:49:39.548022547 +0000 UTC m=+0.102817583 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:49:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:49:41
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root']
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:49:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:49:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:46.060 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:49:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:46.061 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 01:49:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:49:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a5df3550-46c9-4bdf-be6b-af25f5537bab does not exist
Nov 26 01:49:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1c06e775-3c87-45c4-a372-e039a86e0d71 does not exist
Nov 26 01:49:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b000011c-a177-494a-bd4d-721c555097a6 does not exist
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:49:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:49:50 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:49:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:49:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:49:51 compute-0 podman[412115]: 2025-11-26 01:49:51.195237004 +0000 UTC m=+0.084626707 container create 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:49:51 compute-0 podman[412115]: 2025-11-26 01:49:51.159895243 +0000 UTC m=+0.049284986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:51 compute-0 systemd[1]: Started libpod-conmon-9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca.scope.
Nov 26 01:49:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:51 compute-0 podman[412115]: 2025-11-26 01:49:51.335473336 +0000 UTC m=+0.224863089 container init 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:49:51 compute-0 podman[412115]: 2025-11-26 01:49:51.352709505 +0000 UTC m=+0.242099208 container start 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:49:51 compute-0 podman[412115]: 2025-11-26 01:49:51.359378783 +0000 UTC m=+0.248768546 container attach 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:49:51 compute-0 serene_banzai[412129]: 167 167
Nov 26 01:49:51 compute-0 systemd[1]: libpod-9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca.scope: Deactivated successfully.
Nov 26 01:49:51 compute-0 podman[412134]: 2025-11-26 01:49:51.45460734 +0000 UTC m=+0.062685836 container died 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 01:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd0c22b0e3fb0e0349a34683be2609594e3025fb2781f31bd0d8f06612250e12-merged.mount: Deactivated successfully.
Nov 26 01:49:51 compute-0 podman[412134]: 2025-11-26 01:49:51.529259345 +0000 UTC m=+0.137337791 container remove 9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:49:51 compute-0 systemd[1]: libpod-conmon-9972013dd6d8e10d391c7c116ff37aacdaaf6bf2dcfac3b3a0b62811a4c73aca.scope: Deactivated successfully.
Nov 26 01:49:51 compute-0 podman[412156]: 2025-11-26 01:49:51.804212252 +0000 UTC m=+0.093842309 container create 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:49:51 compute-0 podman[412156]: 2025-11-26 01:49:51.769655354 +0000 UTC m=+0.059285461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:51 compute-0 systemd[1]: Started libpod-conmon-13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51.scope.
Nov 26 01:49:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:51 compute-0 podman[412156]: 2025-11-26 01:49:51.965894411 +0000 UTC m=+0.255524528 container init 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:49:51 compute-0 podman[412156]: 2025-11-26 01:49:51.987268057 +0000 UTC m=+0.276898124 container start 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:49:51 compute-0 podman[412156]: 2025-11-26 01:49:51.994284436 +0000 UTC m=+0.283914553 container attach 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:49:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:53 compute-0 flamboyant_ramanujan[412172]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:49:53 compute-0 flamboyant_ramanujan[412172]: --> relative data size: 1.0
Nov 26 01:49:53 compute-0 flamboyant_ramanujan[412172]: --> All data devices are unavailable
Nov 26 01:49:53 compute-0 systemd[1]: libpod-13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51.scope: Deactivated successfully.
Nov 26 01:49:53 compute-0 systemd[1]: libpod-13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51.scope: Consumed 1.252s CPU time.
Nov 26 01:49:53 compute-0 conmon[412172]: conmon 13baf31e08bdcf2c296b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51.scope/container/memory.events
Nov 26 01:49:53 compute-0 podman[412156]: 2025-11-26 01:49:53.304519815 +0000 UTC m=+1.594149872 container died 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:49:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de1b9ba1b9f5ee696b4c8c3ee18d85f5d4a8dda3afa16a88d73ed0a9274dfa3-merged.mount: Deactivated successfully.
Nov 26 01:49:53 compute-0 podman[412156]: 2025-11-26 01:49:53.418485973 +0000 UTC m=+1.708116040 container remove 13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:49:53 compute-0 systemd[1]: libpod-conmon-13baf31e08bdcf2c296bd6c9305196134df3078b2d4fab2f14aa917fda5f2b51.scope: Deactivated successfully.
Nov 26 01:49:53 compute-0 podman[412240]: 2025-11-26 01:49:53.762522757 +0000 UTC m=+0.118018234 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:49:53 compute-0 podman[412239]: 2025-11-26 01:49:53.780687321 +0000 UTC m=+0.138051401 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:49:53 compute-0 podman[412241]: 2025-11-26 01:49:53.781094693 +0000 UTC m=+0.128948923 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:49:54 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:49:54.063 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.574757862 +0000 UTC m=+0.086056229 container create 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.543144086 +0000 UTC m=+0.054442533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:54 compute-0 systemd[1]: Started libpod-conmon-0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17.scope.
Nov 26 01:49:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.725955253 +0000 UTC m=+0.237253700 container init 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.743924932 +0000 UTC m=+0.255223319 container start 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.750736685 +0000 UTC m=+0.262035082 container attach 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:49:54 compute-0 magical_satoshi[412424]: 167 167
Nov 26 01:49:54 compute-0 systemd[1]: libpod-0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17.scope: Deactivated successfully.
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.756284202 +0000 UTC m=+0.267582599 container died 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 01:49:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd3c6c00aaa47e5accce2bcb25e9f30c8ac79a74bd249ce5f574eec03d26062-merged.mount: Deactivated successfully.
Nov 26 01:49:54 compute-0 podman[412408]: 2025-11-26 01:49:54.838053038 +0000 UTC m=+0.349351435 container remove 0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:49:54 compute-0 systemd[1]: libpod-conmon-0942407f7a027d12f898aa6ba037b2f5b7cd50deae8b8341da74e9abee148d17.scope: Deactivated successfully.
Nov 26 01:49:55 compute-0 podman[412447]: 2025-11-26 01:49:55.076936044 +0000 UTC m=+0.073488833 container create 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:49:55 compute-0 systemd[1]: Started libpod-conmon-137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a.scope.
Nov 26 01:49:55 compute-0 podman[412447]: 2025-11-26 01:49:55.049790735 +0000 UTC m=+0.046343534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457da973fb068d2d2d9223578eb74a5a87e8b75a5b1e9293df61c5fd691fb50b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457da973fb068d2d2d9223578eb74a5a87e8b75a5b1e9293df61c5fd691fb50b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457da973fb068d2d2d9223578eb74a5a87e8b75a5b1e9293df61c5fd691fb50b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457da973fb068d2d2d9223578eb74a5a87e8b75a5b1e9293df61c5fd691fb50b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:55 compute-0 podman[412447]: 2025-11-26 01:49:55.217788843 +0000 UTC m=+0.214341612 container init 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:49:55 compute-0 podman[412447]: 2025-11-26 01:49:55.259023611 +0000 UTC m=+0.255576410 container start 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:49:55 compute-0 podman[412447]: 2025-11-26 01:49:55.265558416 +0000 UTC m=+0.262111215 container attach 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:49:56 compute-0 compassionate_borg[412462]: {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    "0": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "devices": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "/dev/loop3"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            ],
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_name": "ceph_lv0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_size": "21470642176",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "name": "ceph_lv0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "tags": {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_name": "ceph",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.crush_device_class": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.encrypted": "0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_id": "0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.vdo": "0"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            },
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "vg_name": "ceph_vg0"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        }
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    ],
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    "1": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "devices": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "/dev/loop4"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            ],
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_name": "ceph_lv1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_size": "21470642176",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "name": "ceph_lv1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "tags": {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_name": "ceph",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.crush_device_class": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.encrypted": "0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_id": "1",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.vdo": "0"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            },
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "vg_name": "ceph_vg1"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        }
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    ],
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    "2": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "devices": [
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "/dev/loop5"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            ],
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_name": "ceph_lv2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_size": "21470642176",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "name": "ceph_lv2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "tags": {
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.cluster_name": "ceph",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.crush_device_class": "",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.encrypted": "0",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osd_id": "2",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:                "ceph.vdo": "0"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            },
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "type": "block",
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:            "vg_name": "ceph_vg2"
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:        }
Nov 26 01:49:56 compute-0 compassionate_borg[412462]:    ]
Nov 26 01:49:56 compute-0 compassionate_borg[412462]: }
Nov 26 01:49:56 compute-0 systemd[1]: libpod-137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a.scope: Deactivated successfully.
Nov 26 01:49:56 compute-0 podman[412447]: 2025-11-26 01:49:56.054176652 +0000 UTC m=+1.050729451 container died 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-457da973fb068d2d2d9223578eb74a5a87e8b75a5b1e9293df61c5fd691fb50b-merged.mount: Deactivated successfully.
Nov 26 01:49:56 compute-0 podman[412447]: 2025-11-26 01:49:56.140937639 +0000 UTC m=+1.137490408 container remove 137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:49:56 compute-0 systemd[1]: libpod-conmon-137de11e076578701ef960d3773bf2a3b82dac34d65d10068175f085b0b5646a.scope: Deactivated successfully.
Nov 26 01:49:56 compute-0 podman[412472]: 2025-11-26 01:49:56.238195114 +0000 UTC m=+0.122971594 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:49:56 compute-0 podman[412480]: 2025-11-26 01:49:56.276783347 +0000 UTC m=+0.166163037 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 26 01:49:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.730 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.731 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.751 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.890 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.891 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.902 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 01:49:56 compute-0 nova_compute[350387]: 2025-11-26 01:49:56.902 350391 INFO nova.compute.claims [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.009 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.239908734 +0000 UTC m=+0.108877664 container create e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.202265708 +0000 UTC m=+0.071234698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:57 compute-0 systemd[1]: Started libpod-conmon-e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf.scope.
Nov 26 01:49:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:49:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176016230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.442715958 +0000 UTC m=+0.311684938 container init e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.459890095 +0000 UTC m=+0.328859025 container start e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.466293476 +0000 UTC m=+0.335262446 container attach e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.465 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:49:57 compute-0 kind_murdock[412705]: 167 167
Nov 26 01:49:57 compute-0 systemd[1]: libpod-e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf.scope: Deactivated successfully.
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.475039814 +0000 UTC m=+0.344008744 container died e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.483 350391 DEBUG nova.compute.provider_tree [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbb3518c40c2a26c7aea787d8001bce1bb799c8a6f9a4df9fb4d603265b9210f-merged.mount: Deactivated successfully.
Nov 26 01:49:57 compute-0 podman[412671]: 2025-11-26 01:49:57.554543716 +0000 UTC m=+0.423512636 container remove e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:49:57 compute-0 systemd[1]: libpod-conmon-e2cf7f0e1a92eac69a38a57337d3745ef81869216dead85b9e10e3d05c4060cf.scope: Deactivated successfully.
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.620 350391 DEBUG nova.scheduler.client.report [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.651 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.653 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.700 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.701 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.729 350391 INFO nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.789 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 01:49:57 compute-0 podman[412730]: 2025-11-26 01:49:57.856599071 +0000 UTC m=+0.094175558 container create 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:49:57 compute-0 podman[412730]: 2025-11-26 01:49:57.825018267 +0000 UTC m=+0.062594794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.913 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.916 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.917 350391 INFO nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Creating image(s)#033[00m
Nov 26 01:49:57 compute-0 systemd[1]: Started libpod-conmon-333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462.scope.
Nov 26 01:49:57 compute-0 nova_compute[350387]: 2025-11-26 01:49:57.975 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:49:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5786d780572a2d8d2337acffc500b776ada073fdf51172ac297e63c1230da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5786d780572a2d8d2337acffc500b776ada073fdf51172ac297e63c1230da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5786d780572a2d8d2337acffc500b776ada073fdf51172ac297e63c1230da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5786d780572a2d8d2337acffc500b776ada073fdf51172ac297e63c1230da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:49:58 compute-0 podman[412730]: 2025-11-26 01:49:58.0221592 +0000 UTC m=+0.259735667 container init 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:49:58 compute-0 podman[412730]: 2025-11-26 01:49:58.031764962 +0000 UTC m=+0.269341419 container start 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 01:49:58 compute-0 podman[412730]: 2025-11-26 01:49:58.036722723 +0000 UTC m=+0.274299260 container attach 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.062 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.110 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.117 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "f456d938eec6117407d48c9debbc5604edb4194e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.118 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.579 350391 WARNING oslo_policy.policy [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 26 01:49:58 compute-0 nova_compute[350387]: 2025-11-26 01:49:58.580 350391 WARNING oslo_policy.policy [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 26 01:49:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:49:59 compute-0 nova_compute[350387]: 2025-11-26 01:49:59.228 350391 DEBUG nova.virt.libvirt.imagebackend [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image locations are: [{'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/48e08d00-37a3-4465-a949-ff0b8afe4def/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/48e08d00-37a3-4465-a949-ff0b8afe4def/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]: {
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_id": 0,
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "type": "bluestore"
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    },
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_id": 2,
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "type": "bluestore"
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    },
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_id": 1,
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:        "type": "bluestore"
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]:    }
Nov 26 01:49:59 compute-0 goofy_nightingale[412756]: }
Nov 26 01:49:59 compute-0 systemd[1]: libpod-333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462.scope: Deactivated successfully.
Nov 26 01:49:59 compute-0 podman[412730]: 2025-11-26 01:49:59.300770952 +0000 UTC m=+1.538347469 container died 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:49:59 compute-0 systemd[1]: libpod-333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462.scope: Consumed 1.245s CPU time.
Nov 26 01:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5786d780572a2d8d2337acffc500b776ada073fdf51172ac297e63c1230da1-merged.mount: Deactivated successfully.
Nov 26 01:49:59 compute-0 podman[412730]: 2025-11-26 01:49:59.403039009 +0000 UTC m=+1.640615476 container remove 333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:49:59 compute-0 systemd[1]: libpod-conmon-333f73c62755ee4b25289ee8e264cc88d15e85e589893b596277a513992dd462.scope: Deactivated successfully.
Nov 26 01:49:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:49:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:49:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:49:59 compute-0 podman[412834]: 2025-11-26 01:49:59.473898036 +0000 UTC m=+0.112661672 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, version=9.4, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm)
Nov 26 01:49:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:49:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4d1f84be-097f-4c42-9fee-62566b321191 does not exist
Nov 26 01:49:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 12701319-3600-43cc-af38-fbcaf4e1b317 does not exist
Nov 26 01:49:59 compute-0 nova_compute[350387]: 2025-11-26 01:49:59.557 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Successfully created port: a47ff2b9-72e9-48d0-9756-5fe939cf4b29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 01:49:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:49:59 compute-0 podman[158021]: time="2025-11-26T01:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:49:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 01:49:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8095 "" "Go-http-client/1.1"
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.447 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:50:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.545 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.part --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.547 350391 DEBUG nova.virt.images [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] 48e08d00-37a3-4465-a949-ff0b8afe4def was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.548 350391 DEBUG nova.privsep.utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.548 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.part /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.732 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Successfully updated port: a47ff2b9-72e9-48d0-9756-5fe939cf4b29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.749 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.749 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.749 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.784 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.part /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.converted" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.789 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.858 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.859 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.901 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.911 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e b1c088bc-7a6b-4580-93ff-685731747189_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:00 compute-0 nova_compute[350387]: 2025-11-26 01:50:00.938 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: ERROR   01:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:50:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:50:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 26 01:50:01 compute-0 nova_compute[350387]: 2025-11-26 01:50:01.473 350391 DEBUG nova.compute.manager [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-changed-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 01:50:01 compute-0 nova_compute[350387]: 2025-11-26 01:50:01.474 350391 DEBUG nova.compute.manager [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Refreshing instance network info cache due to event network-changed-a47ff2b9-72e9-48d0-9756-5fe939cf4b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 01:50:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 26 01:50:01 compute-0 nova_compute[350387]: 2025-11-26 01:50:01.475 350391 DEBUG oslo_concurrency.lockutils [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:50:01 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 26 01:50:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 26 01:50:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 26 01:50:02 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.510 350391 DEBUG nova.network.neutron [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.536 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.536 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Instance network_info: |[{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.538 350391 DEBUG oslo_concurrency.lockutils [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.538 350391 DEBUG nova.network.neutron [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Refreshing network info cache for port a47ff2b9-72e9-48d0-9756-5fe939cf4b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 01:50:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 9 op/s
Nov 26 01:50:02 compute-0 nova_compute[350387]: 2025-11-26 01:50:02.943 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e b1c088bc-7a6b-4580-93ff-685731747189_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.124 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] resizing rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
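After the import returns, the root disk is grown to the flavor's 1 GiB; nova's resize helper is a thin wrapper over librbd's Image.resize. A sketch under the same connection assumptions as the existence check earlier (size in bytes, as in the logged message):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'b1c088bc-7a6b-4580-93ff-685731747189_disk') as image:
            image.resize(1073741824)  # 1 GiB, matching root_gb=1
    finally:
        ioctx.close()
        cluster.shutdown()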
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.383 350391 DEBUG nova.objects.instance [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'migration_context' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.467 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.525 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.538 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.540 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.540 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.585 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.586 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.611 350391 DEBUG nova.network.neutron [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated VIF entry in instance network info cache for port a47ff2b9-72e9-48d0-9756-5fe939cf4b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.611 350391 DEBUG nova.network.neutron [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.635 350391 DEBUG oslo_concurrency.lockutils [req-0f0ba8c6-ce01-4732-9dd1-7a355baa09f9 req-c4cb3eed-ce52-4a08-93c6-4e2c0953888b 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.638 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.640 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
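The eph0 backing file is built in two logged steps, serialized by the "ephemeral_1_0706d66" cache lock: create a raw 1 GiB file, then format it VFAT with the label ephemeral0 so the guest can identify the disk. The same two commands driven through processutils, as the compute service does (a sketch; the file name is copied from the log):

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    # Step 1: sparse raw file sized to the flavor's ephemeral_gb (1 GiB here).
    processutils.execute('env', 'LC_ALL=C', 'LANG=C',
                         'qemu-img', 'create', '-f', 'raw', base, '1G')
    # Step 2: VFAT filesystem, labelled so the guest sees "ephemeral0".
    processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', base)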
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.683 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:50:03 compute-0 nova_compute[350387]: 2025-11-26 01:50:03.692 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 33 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 32 op/s
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.689 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.997s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.920 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.921 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Ensure instance console log exists: /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.922 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.923 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.924 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
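The vgpu_resources acquire/release pair held for only 0.001s is the fingerprint of oslo.concurrency's synchronized decorator: _allocate_mdevs takes the lock, finds no vGPU in the flavor's resource request, and returns immediately. The pattern in miniature (an illustrative sketch, not nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs(flavor):
        # Serialized against every other mdev allocation in this process.
        # With no VGPU resources requested the body is effectively a no-op,
        # which is why the log shows the lock held for ~1 ms.
        return None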
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.930 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Start _get_guest_xml network_info=[{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}], 'ephemerals': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'size': 1, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.943 350391 WARNING nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.954 350391 DEBUG nova.virt.libvirt.host [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.955 350391 DEBUG nova.virt.libvirt.host [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.963 350391 DEBUG nova.virt.libvirt.host [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.964 350391 DEBUG nova.virt.libvirt.host [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
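The two probes above check cgroup v1 first (no CPU controller: this host is cgroup-v2-only) and then v2, where the controller is found. On a v2 host the check boils down to reading the unified hierarchy's controller list. A sketch of the v2 half (the path is the standard cgroup2 mount point):

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a cgroup v2 host this file lists the enabled controllers,
        # e.g. "cpuset cpu io memory hugetlb pids misc".
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup v2 host

    print(has_cgroupsv2_cpu_controller())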
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.965 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.966 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T01:48:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='030e95e2-5458-42ef-a5df-79a19c0b681d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.968 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.969 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.970 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.970 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.971 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.972 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.974 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.975 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.976 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.977 350391 DEBUG nova.virt.hardware [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
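The topology walk above is deterministic: with no flavor or image hints the preference is 0:0:0 and the limits default to 65536 for each axis, so for one vCPU the only factorization is sockets=1, cores=1, threads=1, which is what libvirt is handed below. A toy version of the enumeration (a sketch of the idea, not nova's implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log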
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.987 350391 DEBUG nova.privsep.utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
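The direct I/O probe decides whether libvirt may be given cache="none" for file-backed disks on this path. The usual technique, which nova's supports_direct_io follows, is to attempt one page-aligned write through O_DIRECT and see whether the kernel accepts it. A hedged sketch:

    import mmap
    import os

    def supports_direct_io(dirpath):
        probe = os.path.join(dirpath, '.directio.test')  # hypothetical probe file
        try:
            fd = os.open(probe, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            try:
                # O_DIRECT requires an aligned buffer; anonymous mmap pages
                # are page-aligned, so one 4 KiB page works as the payload.
                buf = mmap.mmap(-1, 4096)
                os.write(fd, buf)
                return True
            finally:
                os.close(fd)
        except OSError:
            return False  # e.g. EINVAL from a filesystem without O_DIRECT
        finally:
            try:
                os.unlink(probe)
            except OSError:
                pass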
Nov 26 01:50:04 compute-0 nova_compute[350387]: 2025-11-26 01:50:04.990 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:50:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883993143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:50:05 compute-0 podman[413185]: 2025-11-26 01:50:05.570460174 +0000 UTC m=+0.127475291 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:50:05 compute-0 nova_compute[350387]: 2025-11-26 01:50:05.606 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:05 compute-0 nova_compute[350387]: 2025-11-26 01:50:05.609 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:50:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1303031869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.066 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.110 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.123 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:50:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:50:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732938126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.655 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
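The three mon dump calls (one per RBD-backed disk) fetch the monitor map nova needs to render the <host name=... port=.../> elements in the guest XML below. Each call is a plain ceph CLI invocation whose JSON lists the mons and their addresses. A sketch of extracting the endpoints (field names are hedged; the exact address layout differs across Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        # 'addr' is typically "IP:PORT/nonce"; newer releases also expose
        # 'public_addrs' with separate v1/v2 entries.
        print(mon.get('name'), mon.get('addr'))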
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.659 350391 DEBUG nova.virt.libvirt.vif [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-sw0o23i9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:49:57Z,user_data=None,user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=b1c088bc-7a6b-4580-93ff-685731747189,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.659 350391 DEBUG nova.network.os_vif_util [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.661 350391 DEBUG nova.network.os_vif_util [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.664 350391 DEBUG nova.objects.instance [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'pci_devices' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:50:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 35 op/s
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.696 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] End _get_guest_xml xml=<domain type="kvm">
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <uuid>b1c088bc-7a6b-4580-93ff-685731747189</uuid>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <name>instance-00000001</name>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <memory>524288</memory>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <metadata>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:name>test_0</nova:name>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 01:50:04</nova:creationTime>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:flavor name="m1.small">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:memory>512</nova:memory>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:user uuid="b130e7a8bed3424f9f5ff63b35cd2b28">admin</nova:user>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:project uuid="4d902f6105ab4c81a51a4751fa89a83e">admin</nova:project>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="48e08d00-37a3-4465-a949-ff0b8afe4def"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <nova:port uuid="a47ff2b9-72e9-48d0-9756-5fe939cf4b29">
Nov 26 01:50:06 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="192.168.0.29" ipVersion="4"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </metadata>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <system>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="serial">b1c088bc-7a6b-4580-93ff-685731747189</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="uuid">b1c088bc-7a6b-4580-93ff-685731747189</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </system>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <os>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </os>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <features>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <apic/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </features>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </clock>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/b1c088bc-7a6b-4580-93ff-685731747189_disk">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </source>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/b1c088bc-7a6b-4580-93ff-685731747189_disk.eph0">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </source>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <target dev="vdb" bus="virtio"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/b1c088bc-7a6b-4580-93ff-685731747189_disk.config">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </source>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:50:06 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:0f:66:48"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <target dev="tapa47ff2b9-72"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/console.log" append="off"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </serial>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <video>
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </video>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 01:50:06 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 01:50:06 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 01:50:06 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:50:06 compute-0 nova_compute[350387]: </domain>
Nov 26 01:50:06 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
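With the XML assembled, spawn continues by defining the domain in libvirt and, once VIF plugging succeeds, launching it. Reduced to the raw libvirt-python calls, the handoff looks roughly like this (a minimal sketch, not nova's driver code; xml stands for the document printed above):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition shown in the log
        dom.create()               # boot instance-00000001
    finally:
        conn.close()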
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.698 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Preparing to wait for external event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.698 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.699 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.699 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.700 350391 DEBUG nova.virt.libvirt.vif [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-sw0o23i9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:49:57Z,user_data=None,user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=b1c088bc-7a6b-4580-93ff-685731747189,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.701 350391 DEBUG nova.network.os_vif_util [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.704 350391 DEBUG nova.network.os_vif_util [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.704 350391 DEBUG os_vif [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
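[annotation] The plug call logged here enters the os-vif library. Below is a minimal sketch of driving the same entry point directly, using the field values shown in the VIFOpenVSwitch repr above; the standalone usage is illustrative, not Nova's actual call site, and a real plug may also want the network field populated.

    import os_vif
    from os_vif.objects import instance_info
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load the 'ovs' plugin via stevedore entry points

    vif = vif_obj.VIFOpenVSwitch(
        id='a47ff2b9-72e9-48d0-9756-5fe939cf4b29',
        address='fa:16:3e:0f:66:48',
        bridge_name='br-int',
        vif_name='tapa47ff2b9-72')
    inst = instance_info.InstanceInfo(
        uuid='b1c088bc-7a6b-4580-93ff-685731747189', name='test_0')

    os_vif.plug(vif, inst)  # delegates to the 'ovs' plugin seen in the log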
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.764 350391 DEBUG ovsdbapp.backend.ovs_idl [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.765 350391 DEBUG ovsdbapp.backend.ovs_idl [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.765 350391 DEBUG ovsdbapp.backend.ovs_idl [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.765 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.766 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.766 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.767 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.769 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.773 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.790 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.791 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.791 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
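[annotation] "Transaction caused no change" is the expected outcome of an idempotent ensure: with may_exist=True the AddBridgeCommand is a no-op when br-int already exists. A sketch of issuing the same transaction with ovsdbapp against the tcp:127.0.0.1:6640 manager from the log (the connection construction details here are assumptions for illustration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # No-op when the bridge already exists, mirroring the log above.
    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)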
Nov 26 01:50:06 compute-0 nova_compute[350387]: 2025-11-26 01:50:06.792 350391 INFO oslo.privsep.daemon [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp71y3c6_o/privsep.sock']#033[00m
Nov 26 01:50:07 compute-0 podman[413273]: 2025-11-26 01:50:07.564016397 +0000 UTC m=+0.120423741 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
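[annotation] The health_status=healthy field above comes from podman's healthcheck machinery running the 'test' command listed in config_data. One run can be triggered by hand; a sketch, with the container name taken from the log:

    import subprocess

    # Exit code 0 means the healthcheck command reported healthy.
    subprocess.run(
        ['podman', 'healthcheck', 'run', 'openstack_network_exporter'],
        check=True)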
Nov 26 01:50:07 compute-0 nova_compute[350387]: 2025-11-26 01:50:07.619 350391 INFO oslo.privsep.daemon [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 01:50:07 compute-0 nova_compute[350387]: 2025-11-26 01:50:07.503 413288 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 01:50:07 compute-0 nova_compute[350387]: 2025-11-26 01:50:07.521 413288 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 01:50:07 compute-0 nova_compute[350387]: 2025-11-26 01:50:07.525 413288 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 26 01:50:07 compute-0 nova_compute[350387]: 2025-11-26 01:50:07.525 413288 INFO oslo.privsep.daemon [-] privsep daemon running as pid 413288#033[00m
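[annotation] The rootwrap invocation above forked a root-owned privsep daemon whose capability set is pinned to exactly what VIF plugging needs. A sketch of how such a context is declared with oslo.privsep; the context and function names here are illustrative (the real definitions live in vif_plug_ovs), but the capability pair matches the eff/prm set logged above:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE])

    @vif_plug.entrypoint
    def set_device_mtu(ifname, mtu):  # hypothetical example entrypoint
        ...  # body executes inside the root privsep daemon, not nova-compute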
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.002 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.003 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa47ff2b9-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.004 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa47ff2b9-72, col_values=(('external_ids', {'iface-id': 'a47ff2b9-72e9-48d0-9756-5fe939cf4b29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:66:48', 'vm-uuid': 'b1c088bc-7a6b-4580-93ff-685731747189'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
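[annotation] The port-plus-external_ids pair above is what ovn-controller later matches on: iface-id carries the Neutron port UUID. A sketch of the same two commands expressed as one atomic ovsdbapp transaction, reusing the api handle from the earlier ovsdbapp sketch:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa47ff2b9-72', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa47ff2b9-72',
            ('external_ids', {
                'iface-id': 'a47ff2b9-72e9-48d0-9756-5fe939cf4b29',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:0f:66:48',
                'vm-uuid': 'b1c088bc-7a6b-4580-93ff-685731747189'})))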
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.007 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:08 compute-0 NetworkManager[48886]: <info>  [1764121808.0094] manager: (tapa47ff2b9-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.011 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.020 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.022 350391 INFO os_vif [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72')#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.103 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.103 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.104 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.104 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No VIF found with MAC fa:16:3e:0f:66:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.105 350391 INFO nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Using config drive#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.154 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:50:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 55 op/s
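[annotation] The pgmap line is the ceph-mgr cluster-log heartbeat: 321 placement groups, all active+clean. The same snapshot can be pulled on demand; a sketch assuming the conf path used by the rbd import below and a readable keyring:

    import subprocess

    subprocess.run(['ceph', 'pg', 'stat', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)  # prints e.g. "321 pgs: 321 active+clean; ..."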
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.777 350391 INFO nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Creating config drive at /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.786 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphq6bekk2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:08 compute-0 nova_compute[350387]: 2025-11-26 01:50:08.946 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphq6bekk2" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
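[annotation] The config drive is a plain ISO9660/Joliet image with volume label config-2, which cloud-init probes for at boot. Rebuilding one by hand with the flags Nova logged; the output and input paths below are placeholders, with /tmp/metadata_dir standing in for the temporary tree Nova populates:

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet',
         '-J', '-r', '-V', 'config-2',  # the label cloud-init searches for
         '/tmp/metadata_dir'],
        check=True)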
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.007 350391 DEBUG nova.storage.rbd_utils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image b1c088bc-7a6b-4580-93ff-685731747189_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.019 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config b1c088bc-7a6b-4580-93ff-685731747189_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.337 350391 DEBUG oslo_concurrency.processutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config b1c088bc-7a6b-4580-93ff-685731747189_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
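[annotation] After the import the local file is deleted (next line), so the ISO lives only in RBD. A quick verification sketch, assuming the same cephx user and conf file as the import command above:

    import subprocess

    out = subprocess.run(
        ['rbd', 'info', '--pool', 'vms',
         'b1c088bc-7a6b-4580-93ff-685731747189_disk.config',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True)
    print(out.stdout)  # size, format 2, object prefix, ...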
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.338 350391 INFO nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Deleting local config drive /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189/disk.config because it was imported into RBD.#033[00m
Nov 26 01:50:09 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 01:50:09 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 01:50:09 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 26 01:50:09 compute-0 kernel: tapa47ff2b9-72: entered promiscuous mode
Nov 26 01:50:09 compute-0 NetworkManager[48886]: <info>  [1764121809.5378] manager: (tapa47ff2b9-72): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 26 01:50:09 compute-0 ovn_controller[89102]: 2025-11-26T01:50:09Z|00027|binding|INFO|Claiming lport a47ff2b9-72e9-48d0-9756-5fe939cf4b29 for this chassis.
Nov 26 01:50:09 compute-0 ovn_controller[89102]: 2025-11-26T01:50:09Z|00028|binding|INFO|a47ff2b9-72e9-48d0-9756-5fe939cf4b29: Claiming fa:16:3e:0f:66:48 192.168.0.29
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.542 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.551 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:09.572 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:66:48 192.168.0.29'], port_security=['fa:16:3e:0f:66:48 192.168.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.29/24', 'neutron:device_id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=a47ff2b9-72e9-48d0-9756-5fe939cf4b29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:50:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:09.574 286844 INFO neutron.agent.ovn.metadata.agent [-] Port a47ff2b9-72e9-48d0-9756-5fe939cf4b29 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e bound to our chassis#033[00m
Nov 26 01:50:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:09.576 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 01:50:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:09.578 286844 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp0eop8pga/privsep.sock']#033[00m
Nov 26 01:50:09 compute-0 systemd-udevd[413403]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 01:50:09 compute-0 systemd-machined[138512]: New machine qemu-1-instance-00000001.
Nov 26 01:50:09 compute-0 NetworkManager[48886]: <info>  [1764121809.6423] device (tapa47ff2b9-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 01:50:09 compute-0 NetworkManager[48886]: <info>  [1764121809.6434] device (tapa47ff2b9-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.665 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:09 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 26 01:50:09 compute-0 ovn_controller[89102]: 2025-11-26T01:50:09Z|00029|binding|INFO|Setting lport a47ff2b9-72e9-48d0-9756-5fe939cf4b29 ovn-installed in OVS
Nov 26 01:50:09 compute-0 ovn_controller[89102]: 2025-11-26T01:50:09Z|00030|binding|INFO|Setting lport a47ff2b9-72e9-48d0-9756-5fe939cf4b29 up in Southbound
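[annotation] ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound DB, which is what ultimately triggers Neutron's network-vif-plugged notification back to Nova. A sketch of checking that binding from the OVN side (assumes ovn-sbctl on this host can reach the southbound DB):

    import subprocess

    subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'list',
         'Port_Binding', 'a47ff2b9-72e9-48d0-9756-5fe939cf4b29'],
        check=True)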
Nov 26 01:50:09 compute-0 nova_compute[350387]: 2025-11-26 01:50:09.677 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 26 01:50:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 26 01:50:09 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 26 01:50:09 compute-0 podman[413389]: 2025-11-26 01:50:09.734207662 +0000 UTC m=+0.120691539 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.333 286844 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.335 286844 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0eop8pga/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.186 413433 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.192 413433 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.197 413433 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.197 413433 INFO oslo.privsep.daemon [-] privsep daemon running as pid 413433#033[00m
Nov 26 01:50:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:10.342 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8d2b2e-74df-4b2b-be1a-a5866b4b7729]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.563 350391 DEBUG nova.compute.manager [req-846c89ea-7780-4eb3-92fc-4749045c618b req-bb2894ae-65af-421c-96bc-203fbb56fceb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.564 350391 DEBUG oslo_concurrency.lockutils [req-846c89ea-7780-4eb3-92fc-4749045c618b req-bb2894ae-65af-421c-96bc-203fbb56fceb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.564 350391 DEBUG oslo_concurrency.lockutils [req-846c89ea-7780-4eb3-92fc-4749045c618b req-bb2894ae-65af-421c-96bc-203fbb56fceb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.565 350391 DEBUG oslo_concurrency.lockutils [req-846c89ea-7780-4eb3-92fc-4749045c618b req-bb2894ae-65af-421c-96bc-203fbb56fceb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
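[annotation] The acquire/release pairs around the per-instance "-events" key are in-process oslo.concurrency locks guarding the instance event registry. A minimal sketch of the same pattern:

    from oslo_concurrency import lockutils

    with lockutils.lock('b1c088bc-7a6b-4580-93ff-685731747189-events'):
        ...  # mutate the per-instance event map atomically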
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.565 350391 DEBUG nova.compute.manager [req-846c89ea-7780-4eb3-92fc-4749045c618b req-bb2894ae-65af-421c-96bc-203fbb56fceb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Processing event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 01:50:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.708 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.710 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121810.7077043, b1c088bc-7a6b-4580-93ff-685731747189 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.711 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] VM Started (Lifecycle Event)#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.718 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.726 350391 INFO nova.virt.libvirt.driver [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Instance spawned successfully.#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.727 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.773 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.784 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
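[annotation] The numeric states in this message are nova.compute.power_state constants: the DB still holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING). For reference, the values as defined in Nova's source:

    NOSTATE = 0x00    # DB power_state: 0 -> not yet recorded as started
    RUNNING = 0x01    # VM power_state: 1 -> domain is running
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07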
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.812 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.813 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121810.7080917, b1c088bc-7a6b-4580-93ff-685731747189 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.813 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] VM Paused (Lifecycle Event)#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.854 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.860 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121810.718911, b1c088bc-7a6b-4580-93ff-685731747189 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.860 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] VM Resumed (Lifecycle Event)#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.874 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.874 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.875 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.876 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.876 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.877 350391 DEBUG nova.virt.libvirt.driver [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.881 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.887 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.914 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.937 350391 INFO nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Took 13.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 01:50:10 compute-0 nova_compute[350387]: 2025-11-26 01:50:10.938 350391 DEBUG nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:50:11 compute-0 nova_compute[350387]: 2025-11-26 01:50:11.023 350391 INFO nova.compute.manager [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Took 14.18 seconds to build instance.#033[00m
Nov 26 01:50:11 compute-0 nova_compute[350387]: 2025-11-26 01:50:11.045 350391 DEBUG oslo_concurrency.lockutils [None req-185ac750-7e1c-46d6-93b5-4b9cbd4950fe b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:11 compute-0 nova_compute[350387]: 2025-11-26 01:50:11.070 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.159 413433 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.160 413433 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.160 413433 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.956 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[90055bc1-f5bd-43e5-98f1-a3d9cb0ebb13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.959 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc97f5f89-71 in ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
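[annotation] Provisioning metadata for a datapath means building a veth pair whose -71 end lives inside the ovnmeta- namespace (where the agent's metadata proxy will listen) while the -70 end is attached to br-int in the root namespace. A rough pyroute2 equivalent of that step, using the interface and namespace names from the log; sketch only, requires root, and omits the agent's error handling:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e'
    netns.create(ns)
    with IPRoute() as ipr:
        # -70 stays in the root namespace (later added to br-int);
        # the -71 peer is created directly inside the namespace.
        ipr.link('add', ifname='tapc97f5f89-70', kind='veth',
                 peer={'ifname': 'tapc97f5f89-71', 'net_ns_fd': ns})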
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.962 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc97f5f89-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.962 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4a53a4ab-695b-4eef-820b-ae57eac42f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:11.971 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e4d0d0-4d65-4d9c-8c30-dbabd2e7c457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.010 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba68007-7f76-49a3-871b-c63ba696bff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.055 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9ffebd-c3a2-483a-bf45-f539a1d27446]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.058 286844 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpzo75pv7s/privsep.sock']#033[00m
Nov 26 01:50:12 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 01:50:12 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 01:50:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.856 286844 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.859 286844 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzo75pv7s/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.733 413526 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.742 413526 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.746 413526 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.747 413526 INFO oslo.privsep.daemon [-] privsep daemon running as pid 413526#033[00m
Nov 26 01:50:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:12.866 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0bef04-6f82-4227-9840-3890e91f491f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.009 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:13.379 413526 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:13.379 413526 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:13.379 413526 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.547 350391 DEBUG nova.compute.manager [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.548 350391 DEBUG oslo_concurrency.lockutils [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.548 350391 DEBUG oslo_concurrency.lockutils [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.549 350391 DEBUG oslo_concurrency.lockutils [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.550 350391 DEBUG nova.compute.manager [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] No waiting events found dispatching network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 01:50:13 compute-0 nova_compute[350387]: 2025-11-26 01:50:13.550 350391 WARNING nova.compute.manager [req-3c0dd4e4-389e-4def-be80-043284c50b9e req-2864d207-6aad-496a-a822-1e7e90377e55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received unexpected event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 for instance with vm_state active and task_state None.#033[00m
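[annotation] This WARNING is benign: OVN re-reported the port after the instance had already gone ACTIVE, so pop_instance_event found no waiter (compare the successful wait completion at 01:50:10.708 above). A simplified model of the prepare/wait/pop flow, illustration only, not Nova's code:

    import threading

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance, name):
        # Registered before plugging so a later pop cannot race the waiter.
        return _events.setdefault((instance, name), threading.Event())

    def pop(instance, name):
        ev = _events.pop((instance, name), None)
        if ev is None:
            print(f'unexpected event {name} for {instance}')  # this WARNING
        else:
            ev.set()  # wakes the thread blocked in ev.wait(timeout)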
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.061 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[579d98e9-fcda-468e-94ab-25112dc53aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 NetworkManager[48886]: <info>  [1764121814.1200] manager: (tapc97f5f89-70): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.119 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6961e948-26f2-4c97-a0ec-413df1b1e08c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.161 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[528a1a3b-6c7f-458d-a246-72adeaa1cbaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.166 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[d08f5162-9415-4282-82c5-d8f9835aface]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 systemd-udevd[413539]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 01:50:14 compute-0 NetworkManager[48886]: <info>  [1764121814.2102] device (tapc97f5f89-70): carrier: link connected
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.225 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[3236c1c2-8fdb-4888-b159-4814d2c24b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.253 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a65a77c9-d7af-4fcb-9c4d-c2259a59c493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 18802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413556, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.277 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d9e72c-1a69-4b06-a5f9-dfbebee2c467]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:e89b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544483, 'tstamp': 544483}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413557, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.302 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[348b95d8-06ca-475e-ba44-580828d118c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 18802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 413558, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
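Annotation: the two large privsep replies above are netlink RTM_NEWLINK dumps for tapc97f5f89-71 inside the ovnmeta-c97f5f89-... namespace, serialized as nested dicts whose 'attrs' member is a list of [name, value] pairs. A minimal, self-contained sketch of how such a message can be read; the get_attr helper here is illustrative (pyroute2 message objects expose an accessor of the same name):

    # 'msg' mirrors, in trimmed form, the dict structure in the replies above.
    def get_attr(msg, name, default=None):
        # attrs is a list of [ATTR_NAME, value] pairs rather than a dict,
        # since the same attribute type can occur more than once.
        for attr_name, value in msg.get('attrs', []):
            if attr_name == name:
                return value
        return default

    msg = {
        'index': 2,
        'flags': 69699,
        'attrs': [
            ['IFLA_IFNAME', 'tapc97f5f89-71'],
            ['IFLA_MTU', 1500],
            ['IFLA_OPERSTATE', 'UP'],
            ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'],
            ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1}],
        ],
    }

    print(get_attr(msg, 'IFLA_IFNAME'))                 # tapc97f5f89-71
    print(get_attr(msg, 'IFLA_ADDRESS'))                # fa:16:3e:72:e8:9b
    print(get_attr(msg, 'IFLA_STATS64')['tx_packets'])  # 1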
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.348 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[017bf370-1942-4c3b-895a-954b6d9f0e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.411 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf0b54-0b22-45fe-878b-1c139fea680b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.413 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.413 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.414 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:50:14 compute-0 kernel: tapc97f5f89-70: entered promiscuous mode
Nov 26 01:50:14 compute-0 nova_compute[350387]: 2025-11-26 01:50:14.416 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:14 compute-0 NetworkManager[48886]: <info>  [1764121814.4171] manager: (tapc97f5f89-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.420 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
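Annotation: the three ovsdbapp transactions above remove the metadata tap port from br-ex if present (a no-op here, hence "Transaction caused no change"), plug it into br-int, and set external_ids:iface-id so ovn-controller can bind the port. A sketch of the same three commands through ovsdbapp's public API; the command names and parameters match the log lines, while the connection setup (socket path, timeout) is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumption: default local socket
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapc97f5f89-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapc97f5f89-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc97f5f89-70',
            ('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'})))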
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.424 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 01:50:14 compute-0 ovn_controller[89102]: 2025-11-26T01:50:14Z|00031|binding|INFO|Releasing lport 3824ec63-7278-42dc-8c72-8ec8e06c2f0b from this chassis (sb_readonly=0)
Nov 26 01:50:14 compute-0 nova_compute[350387]: 2025-11-26 01:50:14.425 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.427 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a268b602-f168-4bc3-9170-09e38ea8fae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.428 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: global
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-c97f5f89-70be-4349-beb5-5f8e6065072e
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID c97f5f89-70be-4349-beb5-5f8e6065072e
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 01:50:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:14.429 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'env', 'PROCESS_TAG=haproxy-c97f5f89-70be-4349-beb5-5f8e6065072e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c97f5f89-70be-4349-beb5-5f8e6065072e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
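Annotation: the haproxy_cfg dump above is the per-network metadata proxy: it listens on 169.254.169.254:80 inside the ovnmeta namespace, forwards to the agent's UNIX socket (haproxy treats a server address starting with '/' as a UNIX socket path), and tags each request with X-OVN-Network-ID. The earlier "Unable to access ... .pid.haproxy" line is the agent probing the pidfile before the daemon has written it. A minimal sketch of that probe, using the paths from the log:

    import os

    PIDFILE = ('/var/lib/neutron/external/pids/'
               'c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy')

    def active_pid(pidfile=PIDFILE):
        try:
            with open(pidfile) as f:
                pid = int(f.read().strip())
        except FileNotFoundError:     # the ENOENT case logged above
            return None
        try:
            os.kill(pid, 0)           # signal 0: existence check only
        except ProcessLookupError:
            return None
        return pid

    print(active_pid())  # None until haproxy has daemonized and written its pid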
Nov 26 01:50:14 compute-0 nova_compute[350387]: 2025-11-26 01:50:14.441 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 924 KiB/s rd, 752 KiB/s wr, 70 op/s
Nov 26 01:50:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:15 compute-0 podman[413590]: 2025-11-26 01:50:15.012552938 +0000 UTC m=+0.106135137 container create ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 26 01:50:15 compute-0 podman[413590]: 2025-11-26 01:50:14.962908642 +0000 UTC m=+0.056490901 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 01:50:15 compute-0 systemd[1]: Started libpod-conmon-ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb.scope.
Nov 26 01:50:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/531fbc26ef1368fc38f47483b7b95402bea388fed13a021a2986e31021e8082c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 01:50:15 compute-0 podman[413590]: 2025-11-26 01:50:15.170119841 +0000 UTC m=+0.263702080 container init ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:50:15 compute-0 podman[413590]: 2025-11-26 01:50:15.179676902 +0000 UTC m=+0.273259101 container start ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 26 01:50:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:15.237 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c97f5f89-70be-4349-beb5-5f8e6065072e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 01:50:15 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [NOTICE]   (413610) : New worker (413612) forked
Nov 26 01:50:15 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [NOTICE]   (413610) : Loading success.
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.244 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.245 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.246 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.247 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:50:15 compute-0 nova_compute[350387]: 2025-11-26 01:50:15.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
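Annotation: the burst of "Running periodic task ComputeManager._*" lines comes from oslo_service's periodic-task machinery: methods decorated with @periodic_task.periodic_task are collected on the manager class and dispatched by run_periodic_tasks(). An illustrative sketch of the pattern (not nova's actual manager code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _check_instance_build_time(self, context):
            print('checking build times')

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print('healing info cache')

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)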
Nov 26 01:50:16 compute-0 nova_compute[350387]: 2025-11-26 01:50:16.075 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:16 compute-0 nova_compute[350387]: 2025-11-26 01:50:16.079 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:50:16 compute-0 nova_compute[350387]: 2025-11-26 01:50:16.080 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:50:16 compute-0 nova_compute[350387]: 2025-11-26 01:50:16.081 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 01:50:16 compute-0 nova_compute[350387]: 2025-11-26 01:50:16.082 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:50:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 428 KiB/s wr, 81 op/s
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.012 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.098 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.120 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.121 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.123 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.124 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.126 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.127 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.339 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.339 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
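Annotation: the acquire/release pairs around "compute_resources" are oslo_concurrency's lockutils, which logs the waited/held times seen above. The same pattern in its two usual forms, a context manager and the decorator nova wraps its resource-tracker methods with:

    from oslo_concurrency import lockutils

    with lockutils.lock('compute_resources'):
        pass  # critical section; acquire/release is logged at DEBUG

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # body runs with the lock held

    clean_compute_node_cache()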
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.340 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.342 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 81 op/s
Nov 26 01:50:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:50:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613735074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.815 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
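Annotation: the resource audit shells out to the ceph CLI through oslo_concurrency.processutils, which produces the "Running cmd (subprocess)" / "returned: 0 in 0.473s" pair above. A sketch of the same call; the ceph flags are copied from the log, and the JSON field names assume a recent Ceph release:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])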
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.940 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.945 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:50:18 compute-0 nova_compute[350387]: 2025-11-26 01:50:18.946 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.545 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.546 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4116MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.546 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.547 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.657 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.658 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.658 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:50:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:19 compute-0 nova_compute[350387]: 2025-11-26 01:50:19.743 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:50:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002463258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.244 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.255 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
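Annotation: placement derives schedulable capacity from exactly the inventory fields logged above, per resource class, as capacity = (total - reserved) * allocation_ratio. Worked out for this provider:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, f in inv.items():
        print(rc, (f['total'] - f['reserved']) * f['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2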
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.320 350391 ERROR nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [req-7b73062c-ce23-48a9-9e65-32bb9d71b507] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 0e9e5c9b-dee2-4076-966b-e19b2697b966.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-7b73062c-ce23-48a9-9e65-32bb9d71b507"}]}#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.341 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.376 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.376 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.409 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.436 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.479 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:50:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 61 op/s
Nov 26 01:50:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:50:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260307526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.948 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:50:20 compute-0 nova_compute[350387]: 2025-11-26 01:50:20.961 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.018 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updated inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.019 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.020 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
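Annotation: the 409 a few lines up is placement's optimistic concurrency control. Every inventory PUT carries resource_provider_generation; "placement.concurrent_update" means another writer bumped the generation first, so the report client refreshes its view and retries, succeeding here with the generation moving from 3 to 4. A conceptual sketch of that loop against the Placement REST API (the endpoint is a placeholder and auth is omitted; a real client would send a token, e.g. via keystoneauth1):

    import requests

    PLACEMENT = 'http://placement.example.com'  # assumption: endpoint, no auth shown
    RP = '0e9e5c9b-dee2-4076-966b-e19b2697b966'
    HDRS = {'OpenStack-API-Version': 'placement 1.26'}

    def put_inventories(inventories, retries=3):
        for _ in range(retries):
            gen = requests.get('%s/resource_providers/%s' % (PLACEMENT, RP),
                               headers=HDRS).json()['generation']
            resp = requests.put(
                '%s/resource_providers/%s/inventories' % (PLACEMENT, RP),
                headers=HDRS,
                json={'resource_provider_generation': gen,
                      'inventories': inventories})
            if resp.status_code != 409:
                return resp
            # 409 placement.concurrent_update: generation raced; refresh, retry
        resp.raise_for_status()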
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.057 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.057 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:21 compute-0 nova_compute[350387]: 2025-11-26 01:50:21.075 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 56 op/s
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.015 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.203 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:23 compute-0 ovn_controller[89102]: 2025-11-26T01:50:23Z|00032|binding|INFO|Releasing lport 3824ec63-7278-42dc-8c72-8ec8e06c2f0b from this chassis (sb_readonly=0)
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2059] manager: (patch-br-int-to-provnet-c19f7092-632f-4b5a-a43a-928c0892538c): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2133] device (patch-br-int-to-provnet-c19f7092-632f-4b5a-a43a-928c0892538c)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2263] manager: (patch-provnet-c19f7092-632f-4b5a-a43a-928c0892538c-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2316] device (patch-provnet-c19f7092-632f-4b5a-a43a-928c0892538c-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2430] manager: (patch-br-int-to-provnet-c19f7092-632f-4b5a-a43a-928c0892538c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2496] manager: (patch-provnet-c19f7092-632f-4b5a-a43a-928c0892538c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2531] device (patch-br-int-to-provnet-c19f7092-632f-4b5a-a43a-928c0892538c)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 01:50:23 compute-0 NetworkManager[48886]: <info>  [1764121823.2562] device (patch-provnet-c19f7092-632f-4b5a-a43a-928c0892538c-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.265 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:23 compute-0 ovn_controller[89102]: 2025-11-26T01:50:23Z|00033|binding|INFO|Releasing lport 3824ec63-7278-42dc-8c72-8ec8e06c2f0b from this chassis (sb_readonly=0)
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.282 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.599 350391 DEBUG nova.compute.manager [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-changed-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.599 350391 DEBUG nova.compute.manager [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Refreshing instance network info cache due to event network-changed-a47ff2b9-72e9-48d0-9756-5fe939cf4b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.599 350391 DEBUG oslo_concurrency.lockutils [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.600 350391 DEBUG oslo_concurrency.lockutils [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:50:23 compute-0 nova_compute[350387]: 2025-11-26 01:50:23.600 350391 DEBUG nova.network.neutron [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Refreshing network info cache for port a47ff2b9-72e9-48d0-9756-5fe939cf4b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 01:50:24 compute-0 podman[413691]: 2025-11-26 01:50:24.578103388 +0000 UTC m=+0.135669464 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 01:50:24 compute-0 podman[413692]: 2025-11-26 01:50:24.599272338 +0000 UTC m=+0.142203729 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:50:24 compute-0 podman[413693]: 2025-11-26 01:50:24.60078624 +0000 UTC m=+0.140404317 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:50:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 52 op/s
Nov 26 01:50:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:24.965 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:50:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:24.966 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:50:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:50:24.967 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:50:25 compute-0 nova_compute[350387]: 2025-11-26 01:50:25.099 350391 DEBUG nova.network.neutron [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated VIF entry in instance network info cache for port a47ff2b9-72e9-48d0-9756-5fe939cf4b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 01:50:25 compute-0 nova_compute[350387]: 2025-11-26 01:50:25.100 350391 DEBUG nova.network.neutron [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:50:25 compute-0 nova_compute[350387]: 2025-11-26 01:50:25.123 350391 DEBUG oslo_concurrency.lockutils [req-8627328c-27cb-4e5b-9053-fc7446f0548c req-b0338af3-bc8c-40bc-9660-31fe5423b642 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
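The network_info payload logged above is a JSON list of VIF dicts. A short sketch, using an abbreviated copy of that payload, showing how the fixed and floating addresses nest inside it:

    import json

    # Abbreviated copy of the cache entry logged above for port
    # a47ff2b9-72e9-48d0-9756-5fe939cf4b29; only the IP fields are kept.
    network_info_json = '''[{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.29",
        "floating_ips": [{"address": "192.168.122.186"}]}]}]}}]'''

    for vif in json.loads(network_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], "fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], "floating:", fip["address"])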
Nov 26 01:50:26 compute-0 nova_compute[350387]: 2025-11-26 01:50:26.078 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:26 compute-0 podman[413746]: 2025-11-26 01:50:26.536292609 +0000 UTC m=+0.082707553 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Nov 26 01:50:26 compute-0 podman[413747]: 2025-11-26 01:50:26.603801741 +0000 UTC m=+0.141172259 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:50:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 767 KiB/s rd, 24 op/s
Nov 26 01:50:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:50:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113617163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:50:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:50:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/113617163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
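The two mon_command dispatches above ("df" and "osd pool get-quota" from client.openstack) are ordinary monitor commands. A sketch of issuing the same pair through the librados Python binding, assuming a reachable cluster, a readable /etc/ceph/ceph.conf, and a valid keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the command as a JSON string plus an input
            # buffer, and returns (retcode, output buffer, error string).
            ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(out or b"{}"))
    finally:
        cluster.shutdown()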
Nov 26 01:50:28 compute-0 nova_compute[350387]: 2025-11-26 01:50:28.018 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 12 op/s
Nov 26 01:50:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:29 compute-0 podman[158021]: time="2025-11-26T01:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:50:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:50:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8601 "" "Go-http-client/1.1"
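The two access-log lines above are libpod REST calls arriving over the Podman API socket (the prometheus-podman-exporter config earlier mounts /run/podman/podman.sock). A sketch of the first request over that UNIX socket using only the standard library; the socket path comes from the exporter's config and the URL is copied from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")  # expect 200, as logged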
Nov 26 01:50:30 compute-0 podman[413790]: 2025-11-26 01:50:30.65181409 +0000 UTC m=+0.183319223 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, name=ubi9, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, release=1214.1726694543, maintainer=Red Hat, Inc.)
Nov 26 01:50:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:31 compute-0 nova_compute[350387]: 2025-11-26 01:50:31.083 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:31 compute-0 openstack_network_exporter[367323]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:50:31 compute-0 openstack_network_exporter[367323]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:50:31 compute-0 openstack_network_exporter[367323]: ERROR   01:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:50:31 compute-0 openstack_network_exporter[367323]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:50:31 compute-0 openstack_network_exporter[367323]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
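These exporter errors are expected on a compute node: ovn-northd and the OVS DB server do not run here, so their control sockets never exist, and the dpif-netdev appctl calls fail for the same reason. An illustrative check for the sockets the exporter looks for; the glob patterns are assumptions based on the default OVS/OVN runtime directories:

    import glob

    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")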
Nov 26 01:50:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:33 compute-0 nova_compute[350387]: 2025-11-26 01:50:33.022 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:36 compute-0 nova_compute[350387]: 2025-11-26 01:50:36.089 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:36 compute-0 podman[413808]: 2025-11-26 01:50:36.580222258 +0000 UTC m=+0.132280968 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 01:50:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:38 compute-0 nova_compute[350387]: 2025-11-26 01:50:38.027 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:38 compute-0 podman[413827]: 2025-11-26 01:50:38.582223789 +0000 UTC m=+0.137860375 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Nov 26 01:50:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:40 compute-0 podman[413848]: 2025-11-26 01:50:40.545576107 +0000 UTC m=+0.109202064 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
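The node_exporter health check above passes, and its config publishes metrics on host port 9100 behind the settings in node_exporter.yaml (passed via --web.config.file). A sketch of a plain-HTTP scrape, assuming TLS is not enforced by that web config; if it is, the same request would need https and the telemetry CA bundle mounted at /etc/node_exporter/tls:

    import urllib.request

    # First few lines of the Prometheus exposition output.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines()[:5]:
            print(line)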
Nov 26 01:50:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:41 compute-0 nova_compute[350387]: 2025-11-26 01:50:41.092 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:50:41
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'backups', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:50:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:50:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.859 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.859 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
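The pair of DEBUG lines above says this polling source has more pollsters than worker threads ([1] thread here), so pollster execution is serialized. A toy illustration of why that stretches the polling cycle:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, as logged
        list(pool.map(poll, ["cpu", "disk.root.size", "network.outgoing.bytes"]))
    # Three pollsters on one thread run back to back: ~0.3 s, not ~0.1 s.
    print("3 pollsters, 1 thread:", round(time.monotonic() - start, 2), "s")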
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.859 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.861 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.861 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:50:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:42.871 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b1c088bc-7a6b-4580-93ff-685731747189 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 01:50:43 compute-0 nova_compute[350387]: 2025-11-26 01:50:43.032 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:43.257 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b1c088bc-7a6b-4580-93ff-685731747189 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.057 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Wed, 26 Nov 2025 01:50:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2829fbc2-6323-408c-83ed-53cf7abda782 x-openstack-request-id: req-2829fbc2-6323-408c-83ed-53cf7abda782 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.057 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b1c088bc-7a6b-4580-93ff-685731747189", "name": "test_0", "status": "ACTIVE", "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "user_id": "b130e7a8bed3424f9f5ff63b35cd2b28", "metadata": {}, "hostId": "2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1", "image": {"id": "48e08d00-37a3-4465-a949-ff0b8afe4def", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/48e08d00-37a3-4465-a949-ff0b8afe4def"}]}, "flavor": {"id": "030e95e2-5458-42ef-a5df-79a19c0b681d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/030e95e2-5458-42ef-a5df-79a19c0b681d"}]}, "created": "2025-11-26T01:49:54Z", "updated": "2025-11-26T01:50:10Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.29", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0f:66:48"}, {"version": 4, "addr": "192.168.122.186", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0f:66:48"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b1c088bc-7a6b-4580-93ff-685731747189"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b1c088bc-7a6b-4580-93ff-685731747189"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T01:50:10.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.057 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b1c088bc-7a6b-4580-93ff-685731747189 used request id req-2829fbc2-6323-408c-83ed-53cf7abda782 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
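The REQ/RESP pair above is ceilometer's discovery step fetching instance metadata from the Nova API. A sketch of the same GET via python-novaclient on a keystoneauth session; the auth URL and credentials are placeholders, while the microversion (2.1) and server UUID match the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Placeholder credentials; only the endpoint behaviour is illustrated.
    auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",
                       username="ceilometer", password="...",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("b1c088bc-7a6b-4580-93ff-685731747189")
    print(server.name, server.status)  # test_0 ACTIVE, per the RESP BODY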
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.060 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.060 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.061 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.061 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.063 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T01:50:44.062297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.065 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.065 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.066 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.066 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.067 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.067 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T01:50:44.067701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.074 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b1c088bc-7a6b-4580-93ff-685731747189 / tapa47ff2b9-72 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.075 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.076 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
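The inspector line above ("No delta meter predecessor") means this is the first reading for that VNIC: a cumulative counter exists, but no delta can be formed yet. A minimal sketch of that first-sample rule:

    previous = {}  # (instance_id, device) -> last cumulative counter

    def delta(key, current):
        prior = previous.get(key)
        previous[key] = current
        return None if prior is None else current - prior

    key = ("b1c088bc-7a6b-4580-93ff-685731747189", "tapa47ff2b9-72")
    print(delta(key, 1))  # None: no predecessor on the first poll
    print(delta(key, 5))  # 4 on the next poll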
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.076 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.077 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.077 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.078 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.078 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T01:50:44.078674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.079 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.080 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.080 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.081 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.081 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.082 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.082 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T01:50:44.082184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.083 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.084 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.084 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.085 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.085 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.086 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T01:50:44.085946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.086 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.087 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.088 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.089 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.089 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.089 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.090 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.090 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T01:50:44.090328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.091 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.092 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.093 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.094 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.094 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T01:50:44.094467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.159 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 32400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.160 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
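The cpu meter is cumulative guest CPU time in nanoseconds (32400000000 ns = 32.4 s of CPU time so far), so a utilization figure has to be derived from two consecutive samples. A sketch of the usual rate-of-change calculation; the second sample, the 10-second interval, and the single vCPU are assumed example values, not taken from this log:

    # Assumed example: two cumulative cpu samples taken 10 s apart on a 1-vCPU guest.
    prev_ns = 32400000000          # the sample logged above
    curr_ns = 32400900000          # hypothetical next sample
    interval_s, vcpus = 10.0, 1
    cpu_util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(cpu_util_pct)            # 9.0 -> about 9% of one vCPU over the window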
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.161 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.162 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.162 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.163 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T01:50:44.163295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.164 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.165 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.165 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.166 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.166 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.167 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.167 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T01:50:44.167607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.168 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 33.203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.169 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
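memory.usage is reported in MB, and the fractional volume 33.203125 is exactly 34000/1024, which is what a 34000 KiB figure from libvirt's KiB-based memory statistics looks like after conversion. The KiB origin is inferred from the exact fraction, not stated anywhere in the log:

    # 33.203125 MB == 34000 KiB / 1024; the KiB source is an inference from the
    # exact fraction, consistent with libvirt reporting memory stats in KiB.
    print(34000 / 1024)  # 33.203125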
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.169 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.170 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.170 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.171 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.171 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T01:50:44.171589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.172 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.173 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
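This is the permanent-failure path rather than a transient error: the libvirt inspector exposes only cumulative interface counters, never precomputed rates (see the "does not provide data" line above), so the *.rate pollsters can never succeed, and raising PollsterPermanentError makes the manager blacklist those resources instead of retrying them every cycle. A self-contained sketch of that blacklisting pattern; only the exception name mirrors the real ceilometer.polling.plugin_base.PollsterPermanentError, the rest is invented for illustration:

    # Toy model of the permanent-error blacklist, not ceilometer source code.
    class PollsterPermanentError(Exception):
        """Carries the resources this pollster can never serve."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def poll_once(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return pollster(todo)
        except PollsterPermanentError as err:
            # Matches the "Prevent pollster ... anymore!" message above.
            blacklist.update(err.resources)
            return []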
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.176 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.177 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T01:50:44.177335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.177 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.180 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.180 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.180 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.181 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T01:50:44.181318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.182 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.183 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.183 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T01:50:44.185573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.185 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.186 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.187 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.189 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.189 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T01:50:44.189875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.190 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.192 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.192 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.193 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.193 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.193 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T01:50:44.193903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.194 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.196 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.197 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.197 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.197 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T01:50:44.197913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.227 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.228 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.229 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
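The disk.device.* pollsters emit one sample per attached block device, which is why a single instance produces three volumes here: two 1 GiB disks (1073741824 bytes) and a third device of 485376 bytes, plausibly a config drive, though the log does not name the devices. Converted for readability:

    # The three disk.device.capacity volumes above, in MiB.
    for b in (1073741824, 1073741824, 485376):
        print(f"{b} B = {b / 1024**2:.3f} MiB")   # 1024.000, 1024.000, 0.463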
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.231 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T01:50:44.232526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.232 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.330 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 18388992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.332 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 4096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.333 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.334 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.335 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.336 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.336 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.337 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.338 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T01:50:44.338011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.339 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.340 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.342 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.343 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.343 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.344 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T01:50:44.344403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.346 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 1686949747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.347 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 1640966 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.348 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2731747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.350 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.350 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.351 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.352 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.353 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T01:50:44.353331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.355 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.356 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.357 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.359 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.360 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.362 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.363 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T01:50:44.362706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.364 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.366 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.367 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.369 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.370 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.370 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.370 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.371 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.371 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.371 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.372 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.372 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.373 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.373 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.373 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.374 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.374 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T01:50:44.371203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T01:50:44.374795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.374 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.376 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.376 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
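A power.state volume of 1 is a running instance: ceilometer reports the same small-integer power-state code that Nova uses, and in the standard nova.compute.power_state mapping 1 is RUNNING. A minimal lookup under that assumption:

    # Standard Nova power-state codes (nova.compute.power_state).
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATES[1])  # RUNNING -- matches the volume: 1 sample above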
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.376 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.377 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.377 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.377 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.377 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.377 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.378 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.378 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.379 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.380 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.380 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.380 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.380 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.380 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.381 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T01:50:44.377491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T01:50:44.380918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.382 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.383 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.384 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.385 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.385 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.385 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.386 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.386 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.387 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.387 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T01:50:44.386108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.389 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.389 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.389 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.389 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.389 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.390 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:50:44.391 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:50:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 op/s
Nov 26 01:50:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:46 compute-0 nova_compute[350387]: 2025-11-26 01:50:46.094 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:46 compute-0 ovn_controller[89102]: 2025-11-26T01:50:46Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:66:48 192.168.0.29
Nov 26 01:50:46 compute-0 ovn_controller[89102]: 2025-11-26T01:50:46Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:66:48 192.168.0.29
Nov 26 01:50:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 53 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 391 KiB/s wr, 6 op/s
Nov 26 01:50:48 compute-0 nova_compute[350387]: 2025-11-26 01:50:48.037 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 54 MiB data, 183 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 609 KiB/s wr, 18 op/s
Nov 26 01:50:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 67 MiB data, 193 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005477703571661131 of space, bias 1.0, pg target 0.16433110714983395 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:50:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
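The _maybe_adjust pass above logs, per pool, the fraction of raw space used, the pool's bias, and the resulting PG target before quantization. The logged targets are consistent with usage_ratio × bias × (OSD count × mon_target_pg_per_osd). A minimal sketch, not the actual ceph-mgr pg_autoscaler code, reproducing that arithmetic; the split into 3 OSDs × the default mon_target_pg_per_osd = 100 (a 300-PG budget) is an assumption inferred from the 60 GiB cluster, and the real module additionally quantizes to a power of two with per-pool floors:

```python
# Sketch of the pg_autoscaler arithmetic visible in the log lines above.
# ASSUMPTIONS: 3 OSDs, default mon_target_pg_per_osd = 100 (budget = 300);
# quantization/minimums are omitted.
def pg_target(usage_ratio, bias, osd_count=3, target_pg_per_osd=100):
    return usage_ratio * bias * osd_count * target_pg_per_osd

# Pool 'vms': 0.0005477703571661131 * 1.0 * 300 ~= 0.16433 (log: quantized to 32)
print(pg_target(0.0005477703571661131, 1.0))
# Pool 'cephfs.cephfs.meta': bias 4.0 ~= 0.00061047 (log: quantized to 16)
print(pg_target(5.087256625643029e-07, 4.0))
```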
Nov 26 01:50:51 compute-0 nova_compute[350387]: 2025-11-26 01:50:51.098 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:50:53 compute-0 nova_compute[350387]: 2025-11-26 01:50:53.041 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:53 compute-0 ovn_controller[89102]: 2025-11-26T01:50:53Z|00034|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 26 01:50:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:50:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:55 compute-0 podman[413875]: 2025-11-26 01:50:55.580093937 +0000 UTC m=+0.129019084 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4)
Nov 26 01:50:55 compute-0 podman[413876]: 2025-11-26 01:50:55.58724058 +0000 UTC m=+0.129237191 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 01:50:55 compute-0 podman[413877]: 2025-11-26 01:50:55.589599496 +0000 UTC m=+0.122720555 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:50:56 compute-0 nova_compute[350387]: 2025-11-26 01:50:56.100 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:50:57 compute-0 podman[413934]: 2025-11-26 01:50:57.595800729 +0000 UTC m=+0.149404833 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:50:57 compute-0 podman[413935]: 2025-11-26 01:50:57.613772218 +0000 UTC m=+0.163080030 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:50:58 compute-0 nova_compute[350387]: 2025-11-26 01:50:58.045 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:50:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Nov 26 01:50:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:50:59 compute-0 podman[158021]: time="2025-11-26T01:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:50:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:50:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8628 "" "Go-http-client/1.1"
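These two GETs are the podman_exporter scraping the libpod REST API over the unix socket it is configured with (unix:///run/podman/podman.sock, per the podman_exporter container config logged earlier). A minimal stdlib client for the same endpoint, as a sketch; the socket path is taken from that config, not independently verified:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket (libpod's API has no TCP listener here)."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same endpoint the exporter hits in the log above:
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, resp.read(200))
```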
Nov 26 01:51:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 910 KiB/s wr, 39 op/s
Nov 26 01:51:00 compute-0 podman[414123]: 2025-11-26 01:51:00.915909702 +0000 UTC m=+0.118663002 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 01:51:01 compute-0 nova_compute[350387]: 2025-11-26 01:51:01.103 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:01 compute-0 podman[414169]: 2025-11-26 01:51:01.161177879 +0000 UTC m=+0.125172687 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:51:01 compute-0 podman[414169]: 2025-11-26 01:51:01.305580698 +0000 UTC m=+0.269575506 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:51:01 compute-0 openstack_network_exporter[367323]: ERROR   01:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:51:01 compute-0 openstack_network_exporter[367323]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:51:01 compute-0 openstack_network_exporter[367323]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:51:01 compute-0 openstack_network_exporter[367323]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:51:01 compute-0 openstack_network_exporter[367323]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
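The exporter errors above come from appctl-style calls that cannot locate control sockets for ovsdb-server and ovn-northd (unsurprising on a compute node, where northd does not run). A quick check of which control sockets actually exist, using the usual default run directories; the paths are assumptions, not taken from this host:

```python
import glob

# Default control-socket locations for OVS and OVN daemons (assumed paths):
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "none")
```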
Nov 26 01:51:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:51:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:51:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 34 KiB/s wr, 10 op/s
Nov 26 01:51:03 compute-0 nova_compute[350387]: 2025-11-26 01:51:03.049 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:03 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:03 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:03 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d377ba57-906e-4ece-952e-3ab9da4f8f7b does not exist
Nov 26 01:51:03 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 031dcff7-9c67-455a-b9fd-ad5d89c96963 does not exist
Nov 26 01:51:03 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bc26ba71-ba1a-4cea-8ac6-381d642a29cf does not exist
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:51:03 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:51:03 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:51:04 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:51:04 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:04 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:51:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Nov 26 01:51:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:04 compute-0 podman[414586]: 2025-11-26 01:51:04.95077946 +0000 UTC m=+0.060027232 container create 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:04.916601441 +0000 UTC m=+0.025849253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:05 compute-0 systemd[1]: Started libpod-conmon-031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd.scope.
Nov 26 01:51:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:05.116129153 +0000 UTC m=+0.225377025 container init 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:05.133539356 +0000 UTC m=+0.242787158 container start 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:05.139551406 +0000 UTC m=+0.248799228 container attach 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:51:05 compute-0 quizzical_jepsen[414602]: 167 167
Nov 26 01:51:05 compute-0 systemd[1]: libpod-031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd.scope: Deactivated successfully.
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:05.145242147 +0000 UTC m=+0.254489949 container died 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-823180249f4363ca6a832b135afa3b974f405de4f171adb85dd7dbc1c58c514b-merged.mount: Deactivated successfully.
Nov 26 01:51:05 compute-0 podman[414586]: 2025-11-26 01:51:05.232007895 +0000 UTC m=+0.341255667 container remove 031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:51:05 compute-0 systemd[1]: libpod-conmon-031024ab14832727e07cdb34e182c7dfde1d27321e4f4ab388061c22f6551fbd.scope: Deactivated successfully.
Nov 26 01:51:05 compute-0 podman[414625]: 2025-11-26 01:51:05.502216068 +0000 UTC m=+0.095310801 container create e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:51:05 compute-0 podman[414625]: 2025-11-26 01:51:05.470731616 +0000 UTC m=+0.063826389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:05 compute-0 systemd[1]: Started libpod-conmon-e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575.scope.
Nov 26 01:51:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
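The xfs "timestamps until 2038" warnings above refer to the signed 32-bit time_t limit; 0x7fffffff seconds after the Unix epoch lands on 2038-01-19, as a quick check confirms:

```python
import datetime

# 0x7fffffff = 2147483647 s past the epoch, the signed 32-bit time_t maximum:
print(datetime.datetime.fromtimestamp(0x7fffffff, datetime.timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```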
Nov 26 01:51:05 compute-0 podman[414625]: 2025-11-26 01:51:05.711584668 +0000 UTC m=+0.304679401 container init e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:51:05 compute-0 podman[414625]: 2025-11-26 01:51:05.741138005 +0000 UTC m=+0.334232728 container start e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:51:05 compute-0 podman[414625]: 2025-11-26 01:51:05.748284097 +0000 UTC m=+0.341378820 container attach e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:51:06 compute-0 nova_compute[350387]: 2025-11-26 01:51:06.107 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:06 compute-0 zealous_hodgkin[414641]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:51:06 compute-0 zealous_hodgkin[414641]: --> relative data size: 1.0
Nov 26 01:51:06 compute-0 zealous_hodgkin[414641]: --> All data devices are unavailable
Nov 26 01:51:07 compute-0 systemd[1]: libpod-e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575.scope: Deactivated successfully.
Nov 26 01:51:07 compute-0 systemd[1]: libpod-e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575.scope: Consumed 1.205s CPU time.
Nov 26 01:51:07 compute-0 conmon[414641]: conmon e815b49631fef475d8e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575.scope/container/memory.events
Nov 26 01:51:07 compute-0 podman[414625]: 2025-11-26 01:51:07.012103601 +0000 UTC m=+1.605198334 container died e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 01:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b1a2684daaae6ecde96788cabb381ab984d841cb0ff128cd2584cb8578e7d4d-merged.mount: Deactivated successfully.
Nov 26 01:51:07 compute-0 podman[414625]: 2025-11-26 01:51:07.112552305 +0000 UTC m=+1.705647018 container remove e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 01:51:07 compute-0 systemd[1]: libpod-conmon-e815b49631fef475d8e6468c34aac12783ed5d592cfa179b86001527fd56f575.scope: Deactivated successfully.
Nov 26 01:51:07 compute-0 podman[414670]: 2025-11-26 01:51:07.209043068 +0000 UTC m=+0.142666871 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 26 01:51:08 compute-0 nova_compute[350387]: 2025-11-26 01:51:08.054 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.213288671 +0000 UTC m=+0.086184682 container create 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.177059105 +0000 UTC m=+0.049955156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:08 compute-0 systemd[1]: Started libpod-conmon-0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8.scope.
Nov 26 01:51:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.365527992 +0000 UTC m=+0.238424003 container init 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.382718849 +0000 UTC m=+0.255614840 container start 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.391043995 +0000 UTC m=+0.263940046 container attach 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:51:08 compute-0 cool_goldstine[414853]: 167 167
Nov 26 01:51:08 compute-0 systemd[1]: libpod-0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8.scope: Deactivated successfully.
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.398019902 +0000 UTC m=+0.270915903 container died 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-411a79e44ed0ab06df3eb365be4911a505346625bc494a1caebf774a1dd43a80-merged.mount: Deactivated successfully.
Nov 26 01:51:08 compute-0 podman[414838]: 2025-11-26 01:51:08.457437235 +0000 UTC m=+0.330333206 container remove 0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:51:08 compute-0 systemd[1]: libpod-conmon-0d00c253f3a9a3016613906c5cf485f5ffd30241c947c995ace62f71bfdf03a8.scope: Deactivated successfully.
Nov 26 01:51:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:08 compute-0 podman[414878]: 2025-11-26 01:51:08.79303861 +0000 UTC m=+0.096738341 container create 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:51:08 compute-0 podman[414878]: 2025-11-26 01:51:08.759117749 +0000 UTC m=+0.062817530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:08 compute-0 systemd[1]: Started libpod-conmon-8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5.scope.
Nov 26 01:51:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7c531ca6fb756f867d6eb266c62aff347a07ca1440dd428437795b3b88001/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7c531ca6fb756f867d6eb266c62aff347a07ca1440dd428437795b3b88001/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7c531ca6fb756f867d6eb266c62aff347a07ca1440dd428437795b3b88001/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7c531ca6fb756f867d6eb266c62aff347a07ca1440dd428437795b3b88001/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:08 compute-0 podman[414878]: 2025-11-26 01:51:08.960810601 +0000 UTC m=+0.264510382 container init 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 01:51:08 compute-0 podman[414878]: 2025-11-26 01:51:08.996933314 +0000 UTC m=+0.300633065 container start 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:51:09 compute-0 podman[414878]: 2025-11-26 01:51:09.005491587 +0000 UTC m=+0.309191328 container attach 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:51:09 compute-0 podman[414890]: 2025-11-26 01:51:09.017437945 +0000 UTC m=+0.157227024 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git)
Nov 26 01:51:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:09 compute-0 confident_cori[414899]: {
Nov 26 01:51:09 compute-0 confident_cori[414899]:    "0": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:        {
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "devices": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "/dev/loop3"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            ],
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_name": "ceph_lv0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_size": "21470642176",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "name": "ceph_lv0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "tags": {
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_name": "ceph",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.crush_device_class": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.encrypted": "0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_id": "0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.vdo": "0"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            },
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "vg_name": "ceph_vg0"
Nov 26 01:51:09 compute-0 confident_cori[414899]:        }
Nov 26 01:51:09 compute-0 confident_cori[414899]:    ],
Nov 26 01:51:09 compute-0 confident_cori[414899]:    "1": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:        {
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "devices": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "/dev/loop4"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            ],
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_name": "ceph_lv1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_size": "21470642176",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "name": "ceph_lv1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "tags": {
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_name": "ceph",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.crush_device_class": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.encrypted": "0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_id": "1",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.vdo": "0"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            },
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "vg_name": "ceph_vg1"
Nov 26 01:51:09 compute-0 confident_cori[414899]:        }
Nov 26 01:51:09 compute-0 confident_cori[414899]:    ],
Nov 26 01:51:09 compute-0 confident_cori[414899]:    "2": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:        {
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "devices": [
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "/dev/loop5"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            ],
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_name": "ceph_lv2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_size": "21470642176",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "name": "ceph_lv2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "tags": {
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.cluster_name": "ceph",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.crush_device_class": "",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.encrypted": "0",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osd_id": "2",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:                "ceph.vdo": "0"
Nov 26 01:51:09 compute-0 confident_cori[414899]:            },
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "type": "block",
Nov 26 01:51:09 compute-0 confident_cori[414899]:            "vg_name": "ceph_vg2"
Nov 26 01:51:09 compute-0 confident_cori[414899]:        }
Nov 26 01:51:09 compute-0 confident_cori[414899]:    ]
Nov 26 01:51:09 compute-0 confident_cori[414899]: }
Nov 26 01:51:09 compute-0 systemd[1]: libpod-8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5.scope: Deactivated successfully.
Nov 26 01:51:09 compute-0 conmon[414899]: conmon 8f3d979f6dc8c01dd06d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5.scope/container/memory.events
Nov 26 01:51:09 compute-0 podman[414878]: 2025-11-26 01:51:09.857693842 +0000 UTC m=+1.161393583 container died 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef7c531ca6fb756f867d6eb266c62aff347a07ca1440dd428437795b3b88001-merged.mount: Deactivated successfully.
Nov 26 01:51:09 compute-0 podman[414878]: 2025-11-26 01:51:09.959878966 +0000 UTC m=+1.263578677 container remove 8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cori, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:51:09 compute-0 systemd[1]: libpod-conmon-8f3d979f6dc8c01dd06d4c8ca3f70b9c86f8ad1053b8771dbc8c6646cb71d3f5.scope: Deactivated successfully.
Nov 26 01:51:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:10 compute-0 nova_compute[350387]: 2025-11-26 01:51:10.885 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:51:10 compute-0 nova_compute[350387]: 2025-11-26 01:51:10.888 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:51:10 compute-0 nova_compute[350387]: 2025-11-26 01:51:10.907 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 01:51:10 compute-0 nova_compute[350387]: 2025-11-26 01:51:10.993 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:51:10 compute-0 nova_compute[350387]: 2025-11-26 01:51:10.994 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.007 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.007 350391 INFO nova.compute.claims [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Claim successful on node compute-0.ctlplane.example.com
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.109 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:11 compute-0 podman[415075]: 2025-11-26 01:51:11.171802611 +0000 UTC m=+0.087425347 container create 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.181 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:51:11 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 01:51:11 compute-0 podman[415075]: 2025-11-26 01:51:11.14069248 +0000 UTC m=+0.056315286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:11 compute-0 systemd[1]: Started libpod-conmon-7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608.scope.
Nov 26 01:51:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:11 compute-0 podman[415075]: 2025-11-26 01:51:11.312343742 +0000 UTC m=+0.227966568 container init 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:51:11 compute-0 podman[415075]: 2025-11-26 01:51:11.332275576 +0000 UTC m=+0.247898322 container start 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 01:51:11 compute-0 podman[415075]: 2025-11-26 01:51:11.338971266 +0000 UTC m=+0.254594032 container attach 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 01:51:11 compute-0 sad_lichterman[415099]: 167 167
Nov 26 01:51:11 compute-0 systemd[1]: libpod-7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608.scope: Deactivated successfully.
Nov 26 01:51:11 compute-0 conmon[415099]: conmon 7ffb2038003ca8b806ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608.scope/container/memory.events
Nov 26 01:51:11 compute-0 podman[415091]: 2025-11-26 01:51:11.372743042 +0000 UTC m=+0.135997122 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:51:11 compute-0 podman[415134]: 2025-11-26 01:51:11.419547258 +0000 UTC m=+0.047873677 container died 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb279582383ee0022604064641f8873ca0b0d8a16b2c50f75567cfe1694bb350-merged.mount: Deactivated successfully.
Nov 26 01:51:11 compute-0 podman[415134]: 2025-11-26 01:51:11.500706477 +0000 UTC m=+0.129032816 container remove 7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:51:11 compute-0 systemd[1]: libpod-conmon-7ffb2038003ca8b806ee8257533fc948606c5f44a55a4856ea58163f911e0608.scope: Deactivated successfully.
Nov 26 01:51:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:51:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059529059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.736 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.748 350391 DEBUG nova.compute.provider_tree [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.768 350391 DEBUG nova.scheduler.client.report [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.800 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.801 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 01:51:11 compute-0 podman[415164]: 2025-11-26 01:51:11.804091219 +0000 UTC m=+0.072356440 container create fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:51:11 compute-0 podman[415164]: 2025-11-26 01:51:11.776868958 +0000 UTC m=+0.045134179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.877 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.877 350391 DEBUG nova.network.neutron [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 01:51:11 compute-0 systemd[1]: Started libpod-conmon-fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe.scope.
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.907 350391 INFO nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 01:51:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecbf3efdd61a1fff969804eb777b2c38ab7c0e88d2356af938c0cf08a22f305/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecbf3efdd61a1fff969804eb777b2c38ab7c0e88d2356af938c0cf08a22f305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecbf3efdd61a1fff969804eb777b2c38ab7c0e88d2356af938c0cf08a22f305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eecbf3efdd61a1fff969804eb777b2c38ab7c0e88d2356af938c0cf08a22f305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:51:11 compute-0 nova_compute[350387]: 2025-11-26 01:51:11.957 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 01:51:12 compute-0 podman[415164]: 2025-11-26 01:51:12.000180493 +0000 UTC m=+0.268445734 container init fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:51:12 compute-0 podman[415164]: 2025-11-26 01:51:12.020249262 +0000 UTC m=+0.288514473 container start fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:51:12 compute-0 podman[415164]: 2025-11-26 01:51:12.02656299 +0000 UTC m=+0.294828201 container attach fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.076 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.078 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.078 350391 INFO nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Creating image(s)
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.131 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.193 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.254 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.271 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.402 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.404 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "f456d938eec6117407d48c9debbc5604edb4194e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.405 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.405 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.463 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.473 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:51:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:12 compute-0 nova_compute[350387]: 2025-11-26 01:51:12.915 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:51:13 compute-0 nova_compute[350387]: 2025-11-26 01:51:13.074 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] resizing rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 01:51:13 compute-0 nova_compute[350387]: 2025-11-26 01:51:13.124 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]: {
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_id": 0,
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "type": "bluestore"
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    },
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_id": 2,
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "type": "bluestore"
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    },
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_id": 1,
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:        "type": "bluestore"
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]:    }
Nov 26 01:51:13 compute-0 quirky_meninsky[415182]: }
Nov 26 01:51:13 compute-0 systemd[1]: libpod-fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe.scope: Deactivated successfully.
Nov 26 01:51:13 compute-0 systemd[1]: libpod-fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe.scope: Consumed 1.137s CPU time.
Nov 26 01:51:13 compute-0 conmon[415182]: conmon fcb70326db96123f3ad8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe.scope/container/memory.events
Nov 26 01:51:13 compute-0 podman[415363]: 2025-11-26 01:51:13.274873256 +0000 UTC m=+0.041887437 container died fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-eecbf3efdd61a1fff969804eb777b2c38ab7c0e88d2356af938c0cf08a22f305-merged.mount: Deactivated successfully.
Nov 26 01:51:13 compute-0 podman[415363]: 2025-11-26 01:51:13.383129971 +0000 UTC m=+0.150144112 container remove fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 26 01:51:13 compute-0 nova_compute[350387]: 2025-11-26 01:51:13.383 350391 DEBUG nova.objects.instance [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'migration_context' on Instance uuid 0e500d52-72e1-4501-b4d6-fc6ca575760f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:51:13 compute-0 systemd[1]: libpod-conmon-fcb70326db96123f3ad80f2e1b609c3a2b4b7bb19dc4212e726117336988f5fe.scope: Deactivated successfully.
Nov 26 01:51:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:51:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:51:13 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:13 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f6300e2e-6439-45bc-bcaa-bbca7d2e2363 does not exist
Nov 26 01:51:13 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1de0b70a-db47-43c3-8e19-715db43b6a98 does not exist
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.168 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.216 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.227 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:51:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:14.238 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 01:51:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:14.241 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.258 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.330 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.333 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.336 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.337 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.413 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.427 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:14 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:14 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.626 350391 DEBUG nova.network.neutron [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Successfully updated port: cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.646 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.647 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.648 350391 DEBUG nova.network.neutron [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.711 350391 DEBUG nova.compute.manager [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-changed-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.714 350391 DEBUG nova.compute.manager [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Refreshing instance network info cache due to event network-changed-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.719 350391 DEBUG oslo_concurrency.lockutils [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:51:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 82 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 88 KiB/s wr, 3 op/s
Nov 26 01:51:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:14 compute-0 nova_compute[350387]: 2025-11-26 01:51:14.872 350391 DEBUG nova.network.neutron [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.011 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.190 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.190 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Ensure instance console log exists: /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.191 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.191 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:15 compute-0 nova_compute[350387]: 2025-11-26 01:51:15.192 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.113 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.371 350391 DEBUG nova.network.neutron [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.398 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.399 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Instance network_info: |[{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.400 350391 DEBUG oslo_concurrency.lockutils [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.401 350391 DEBUG nova.network.neutron [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Refreshing network info cache for port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.407 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Start _get_guest_xml network_info=[{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}], 'ephemerals': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'size': 1, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.424 350391 WARNING nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.440 350391 DEBUG nova.virt.libvirt.host [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.442 350391 DEBUG nova.virt.libvirt.host [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.451 350391 DEBUG nova.virt.libvirt.host [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.453 350391 DEBUG nova.virt.libvirt.host [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.454 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.456 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T01:48:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='030e95e2-5458-42ef-a5df-79a19c0b681d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.457 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.458 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.459 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.460 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.461 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.462 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.462 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.463 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.463 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.463 350391 DEBUG nova.virt.hardware [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.468 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 93 MiB data, 207 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 751 KiB/s wr, 16 op/s
Nov 26 01:51:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:51:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571783206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.944 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:16 compute-0 nova_compute[350387]: 2025-11-26 01:51:16.948 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.059 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.061 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.062 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.063 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.087 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.346 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.347 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.348 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.349 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:51:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:51:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641936659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.508 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.557 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:51:17 compute-0 nova_compute[350387]: 2025-11-26 01:51:17.569 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:51:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470154019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.112 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.115 350391 DEBUG nova.virt.libvirt.vif [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:51:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',id=2,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-fn9f8qdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:51:12Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 01:51:18 compute-0 nova_compute[350387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=0e500d52-72e1-4501-b4d6-fc6ca575760f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.115 350391 DEBUG nova.network.os_vif_util [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.117 350391 DEBUG nova.network.os_vif_util [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.119 350391 DEBUG nova.objects.instance [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e500d52-72e1-4501-b4d6-fc6ca575760f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.129 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.151 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] End _get_guest_xml xml=<domain type="kvm">
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <uuid>0e500d52-72e1-4501-b4d6-fc6ca575760f</uuid>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <name>instance-00000002</name>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <memory>524288</memory>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <metadata>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:name>vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo</nova:name>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 01:51:16</nova:creationTime>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:flavor name="m1.small">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:memory>512</nova:memory>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:user uuid="b130e7a8bed3424f9f5ff63b35cd2b28">admin</nova:user>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:project uuid="4d902f6105ab4c81a51a4751fa89a83e">admin</nova:project>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="48e08d00-37a3-4465-a949-ff0b8afe4def"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <nova:port uuid="cc7c212d-f288-48f9-a0c6-0e5635e3f2b7">
Nov 26 01:51:18 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="192.168.0.118" ipVersion="4"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </metadata>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <system>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="serial">0e500d52-72e1-4501-b4d6-fc6ca575760f</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="uuid">0e500d52-72e1-4501-b4d6-fc6ca575760f</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </system>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <os>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </os>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <features>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <apic/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </features>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </clock>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0e500d52-72e1-4501-b4d6-fc6ca575760f_disk">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.eph0">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <target dev="vdb" bus="virtio"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:51:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:70:20:57"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <target dev="tapcc7c212d-f2"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/console.log" append="off"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </serial>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <video>
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </video>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 01:51:18 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 01:51:18 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 01:51:18 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:51:18 compute-0 nova_compute[350387]: </domain>
Nov 26 01:51:18 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
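The block above is the complete libvirt guest XML that nova's `_get_guest_xml` emitted for instance 0e500d52. A minimal stdlib sketch for pulling the RBD-backed disks (target device, image name, monitor endpoints) out of such a dump, assuming the string holds the `<domain>` document exactly as logged:

```python
# Parse the RBD disk sources out of a libvirt domain XML like the one
# logged above. Stdlib only; domain_xml is the <domain>...</domain> text.
import xml.etree.ElementTree as ET

def rbd_disks(domain_xml: str):
    """Yield (target_dev, rbd_image, monitor_hosts) per network disk."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk[@type='network']"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        hosts = [(h.get("name"), h.get("port"))
                 for h in source.findall("host")]
        yield (disk.find("target").get("dev"), source.get("name"), hosts)

# For the XML above this yields e.g.
# ("vda", "vms/0e500d52-72e1-4501-b4d6-fc6ca575760f_disk",
#  [("192.168.122.100", "6789")])
```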
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.153 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Preparing to wait for external event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.154 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.154 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.155 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
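The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils serializing access to the per-instance event registry. A hedged sketch of the same pattern; the lock name mirrors the `<instance-uuid>-events` convention seen in the log, and the function body merely stands in for Nova's real critical section:

```python
# Sketch of the oslo.concurrency pattern behind the three lock lines
# above. Requires the oslo.concurrency package.
from oslo_concurrency import lockutils

INSTANCE_UUID = "0e500d52-72e1-4501-b4d6-fc6ca575760f"  # from the log

@lockutils.synchronized(f"{INSTANCE_UUID}-events")
def create_or_get_event(name):
    # Nova's _create_or_get_event mutates a per-instance event dict under
    # this lock; calling this logs Acquiring/acquired/released just as above.
    return name
```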
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.156 350391 DEBUG nova.virt.libvirt.vif [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:51:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',id=2,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-fn9f8qdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:51:12Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 26 01:51:18 compute-0 nova_compute[350387]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=0e500d52-72e1-4501-b4d6-fc6ca575760f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.157 350391 DEBUG nova.network.os_vif_util [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.158 350391 DEBUG nova.network.os_vif_util [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.159 350391 DEBUG os_vif [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
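Here os_vif has converted nova's VIF dict into the VIFOpenVSwitch object and starts plugging it. A rough sketch of that call sequence with the os-vif library; field values are copied from the log, the full subnet model and port profile are omitted as an assumption, and actually running it needs a live Open vSwitch and root privileges:

```python
# Hedged sketch of the os-vif plug path reflected in the two log lines
# above. Simplified: no subnets, no port_profile, no error handling.
import os_vif
from os_vif.objects import instance_info as ii, network, vif as vif_obj

os_vif.initialize()

net = network.Network(id="c97f5f89-70be-4349-beb5-5f8e6065072e",
                      bridge="br-int")
vif = vif_obj.VIFOpenVSwitch(
    id="cc7c212d-f288-48f9-a0c6-0e5635e3f2b7",
    address="fa:16:3e:70:20:57",
    vif_name="tapcc7c212d-f2",
    bridge_name="br-int",
    plugin="ovs",
    network=net)
info = ii.InstanceInfo(uuid="0e500d52-72e1-4501-b4d6-fc6ca575760f",
                       name="instance-00000002")
os_vif.plug(vif, info)  # emits the "Plugging vif ..." line seen above
```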
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.159 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.160 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.161 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.165 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.165 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc7c212d-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.166 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc7c212d-f2, col_values=(('external_ids', {'iface-id': 'cc7c212d-f288-48f9-a0c6-0e5635e3f2b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:20:57', 'vm-uuid': '0e500d52-72e1-4501-b4d6-fc6ca575760f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.168 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:18 compute-0 NetworkManager[48886]: <info>  [1764121878.1696] manager: (tapcc7c212d-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.172 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.181 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.183 350391 INFO os_vif [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2')#033[00m
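The AddPortCommand/DbSetCommand pair above is ovsdbapp's transaction API wiring the tap device into br-int with the Neutron external_ids. A sketch issuing the same two commands in one transaction, assuming the standard local ovsdb-server unix socket:

```python
# Reproduce the logged ovsdbapp transaction: add the port to br-int and
# set its Interface external_ids. Requires ovsdbapp and a local OVS.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:  # one txn, two commands
    txn.add(api.add_port("br-int", "tapcc7c212d-f2", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tapcc7c212d-f2",
        ("external_ids",
         {"iface-id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7",
          "attached-mac": "fa:16:3e:70:20:57"})))
```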
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.266 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.267 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.268 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.269 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No VIF found with MAC fa:16:3e:70:20:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.270 350391 INFO nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Using config drive#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.322 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
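The "rbd image ... does not exist" probe above comes from nova's rbd_utils, which simply tries to open the image. A sketch of the same existence check with the python-rados/python-rbd bindings; cluster name, pool and client id are taken from the log, and it needs a reachable Ceph cluster:

```python
# Existence probe behind the log line above: opening a missing RBD image
# raises rbd.ImageNotFound. Requires python3-rados and python3-rbd.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        name = "0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config"
        try:
            with rbd.Image(ioctx, name, read_only=True):
                print("image exists")
        except rbd.ImageNotFound:
            print(f"rbd image {name} does not exist")  # as logged above
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```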
Nov 26 01:51:18 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 01:51:18.115 350391 DEBUG nova.virt.libvirt.vif [None req-bf42769c-37 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 01:51:18 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 01:51:18.156 350391 DEBUG nova.virt.libvirt.vif [None req-bf42769c-37 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.524414) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878524679, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 251, "total_data_size": 2100975, "memory_usage": 2148848, "flush_reason": "Manual Compaction"}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878538675, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2058358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24141, "largest_seqno": 25519, "table_properties": {"data_size": 2051882, "index_size": 3679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13778, "raw_average_key_size": 20, "raw_value_size": 2038703, "raw_average_value_size": 2963, "num_data_blocks": 165, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764121744, "oldest_key_time": 1764121744, "file_creation_time": 1764121878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14317 microseconds, and 7263 cpu microseconds.
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.538731) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2058358 bytes OK
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.538750) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.541158) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.541175) EVENT_LOG_v1 {"time_micros": 1764121878541170, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.541191) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2094827, prev total WAL file size 2094827, number of live WAL files 2.
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.542540) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2010KB)], [56(6877KB)]
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878542582, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9100564, "oldest_snapshot_seqno": -1}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4604 keys, 7351091 bytes, temperature: kUnknown
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878600386, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7351091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7320261, "index_size": 18240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 115327, "raw_average_key_size": 25, "raw_value_size": 7236711, "raw_average_value_size": 1571, "num_data_blocks": 756, "num_entries": 4604, "num_filter_entries": 4604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764121878, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.600598) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7351091 bytes
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.602697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.3 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.7 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 5122, records dropped: 518 output_compression: NoCompression
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.602717) EVENT_LOG_v1 {"time_micros": 1764121878602707, "job": 30, "event": "compaction_finished", "compaction_time_micros": 57859, "compaction_time_cpu_micros": 35367, "output_level": 6, "num_output_files": 1, "total_output_size": 7351091, "num_input_records": 5122, "num_output_records": 4604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878603286, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764121878605084, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.542335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.605422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.605432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.605436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.605440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:51:18 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:51:18.605444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
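The ceph-mon rocksdb burst above carries machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished, ...). A small stdlib sketch for extracting and decoding those events from a journal dump like this one:

```python
# Pull the rocksdb EVENT_LOG_v1 JSON objects out of journal lines such as
# the ceph-mon entries above. Stdlib only; feed it the raw log text.
import json
import sys

def rocksdb_events(lines):
    marker = "EVENT_LOG_v1 "
    for line in lines:
        i = line.find(marker)
        if i >= 0:
            yield json.loads(line[i + len(marker):])

for ev in rocksdb_events(sys.stdin):
    print(ev["event"], ev.get("job"))  # e.g. "flush_started 29"
```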
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.702 350391 INFO nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Creating config drive at /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.715 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcu25g366 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 110 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 30 op/s
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.862 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcu25g366" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.898 350391 DEBUG nova.storage.rbd_utils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:51:18 compute-0 nova_compute[350387]: 2025-11-26 01:51:18.905 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.184 350391 DEBUG oslo_concurrency.processutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config 0e500d52-72e1-4501-b4d6-fc6ca575760f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.186 350391 INFO nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Deleting local config drive /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.config because it was imported into RBD.#033[00m
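Lines 01:51:18.702 through 01:51:19.186 show the whole config-drive flow: mkisofs builds the ISO, `rbd import` pushes it into the vms pool, then the local copy is deleted. A condensed sketch of the same sequence via subprocess; paths and names are copied from the log (the `-publisher`/`-quiet` flags and error handling are dropped for brevity, and the /tmp source directory was an ephemeral tempdir):

```python
# Condensed reproduction of the config-drive creation and RBD import
# commands logged above.
import os
import subprocess

uuid = "0e500d52-72e1-4501-b4d6-fc6ca575760f"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
     "/tmp/tmpcu25g366"],  # ephemeral metadata dir from the log
    check=True)
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)
os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
```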
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.196 350391 DEBUG nova.network.neutron [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated VIF entry in instance network info cache for port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.198 350391 DEBUG nova.network.neutron [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.231 350391 DEBUG oslo_concurrency.lockutils [req-a04e635c-c365-4825-98d4-14383f60e15e req-c1cf0974-2ec4-40b9-99a3-ef926057ee00 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:51:19 compute-0 NetworkManager[48886]: <info>  [1764121879.2908] manager: (tapcc7c212d-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 26 01:51:19 compute-0 kernel: tapcc7c212d-f2: entered promiscuous mode
Nov 26 01:51:19 compute-0 ovn_controller[89102]: 2025-11-26T01:51:19Z|00035|binding|INFO|Claiming lport cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 for this chassis.
Nov 26 01:51:19 compute-0 ovn_controller[89102]: 2025-11-26T01:51:19Z|00036|binding|INFO|cc7c212d-f288-48f9-a0c6-0e5635e3f2b7: Claiming fa:16:3e:70:20:57 192.168.0.118
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.294 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.303 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:20:57 192.168.0.118'], port_security=['fa:16:3e:70:20:57 192.168.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-rkxsz3cjssco-tkhgbferrqyy-port-fjd2vmeyty65', 'neutron:cidrs': '192.168.0.118/24', 'neutron:device_id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-rkxsz3cjssco-tkhgbferrqyy-port-fjd2vmeyty65', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.305 286844 INFO neutron.agent.ovn.metadata.agent [-] Port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e bound to our chassis#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.307 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 01:51:19 compute-0 ovn_controller[89102]: 2025-11-26T01:51:19Z|00037|binding|INFO|Setting lport cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 ovn-installed in OVS
Nov 26 01:51:19 compute-0 ovn_controller[89102]: 2025-11-26T01:51:19Z|00038|binding|INFO|Setting lport cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 up in Southbound
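ovn-controller has now claimed the lport for this chassis and marked it up in the Southbound DB. A hedged sketch for confirming those transitions from the SB Port_Binding table with ovn-sbctl, run wherever the Southbound DB is reachable:

```python
# Check the Port_Binding row for the lport claimed above: once bound,
# "chassis" points at compute-0 and "up" flips to true.
import subprocess

port = "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7"
out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={port}"],
    capture_output=True, text=True, check=True).stdout
print(out)
```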
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.332 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[76f745d8-34f5-4802-9c25-3668e14c888b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:51:19 compute-0 systemd-machined[138512]: New machine qemu-2-instance-00000002.
Nov 26 01:51:19 compute-0 systemd-udevd[415734]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 01:51:19 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.376 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[649d23c9-3954-4089-b392-7e97638dec94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.379 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[15301869-7237-4461-a988-195cc4a458d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:51:19 compute-0 NetworkManager[48886]: <info>  [1764121879.3883] device (tapcc7c212d-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 01:51:19 compute-0 NetworkManager[48886]: <info>  [1764121879.3921] device (tapcc7c212d-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.426 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6aeed1-082c-44e5-b4c5-65b0585a5283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.453 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[320cc38e-a49e-43f7-a8ae-f2fe9301cd3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 18802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415745, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.479 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1b671daf-d26b-4ca7-b59a-a651dfd8f348]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415746, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415746, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
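The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-c97f5f89 namespace, showing 192.168.0.2/24 and the 169.254.169.254 metadata address on the tap device. A sketch reproducing that address check with pyroute2; it needs root and the named namespace under /var/run/netns:

```python
# List addresses inside the OVN metadata namespace, matching the
# RTM_NEWADDR dumps above. Requires the pyroute2 package and root.
from pyroute2 import NetNS

ns = NetNS("ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e")
try:
    for msg in ns.get_addr():
        attrs = dict(msg["attrs"])
        print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"))
        # expect: tapc97f5f89-71 192.168.0.2
        #         tapc97f5f89-71 169.254.169.254
finally:
    ns.close()
```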
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.482 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.484 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.486 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.487 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.487 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.488 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:19.488 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.540 350391 DEBUG nova.compute.manager [req-9fa25ad6-6af9-401f-bc1a-6cd714ce3b43 req-0fa17611-e223-4c5f-b03e-a7b9baf0adc0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.541 350391 DEBUG oslo_concurrency.lockutils [req-9fa25ad6-6af9-401f-bc1a-6cd714ce3b43 req-0fa17611-e223-4c5f-b03e-a7b9baf0adc0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.541 350391 DEBUG oslo_concurrency.lockutils [req-9fa25ad6-6af9-401f-bc1a-6cd714ce3b43 req-0fa17611-e223-4c5f-b03e-a7b9baf0adc0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.542 350391 DEBUG oslo_concurrency.lockutils [req-9fa25ad6-6af9-401f-bc1a-6cd714ce3b43 req-0fa17611-e223-4c5f-b03e-a7b9baf0adc0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.542 350391 DEBUG nova.compute.manager [req-9fa25ad6-6af9-401f-bc1a-6cd714ce3b43 req-0fa17611-e223-4c5f-b03e-a7b9baf0adc0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Processing event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.591 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.606 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.606 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.607 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.607 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.608 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.609 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.610 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.610 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.611 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.611 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.634 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.635 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.635 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.635 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:51:19 compute-0 nova_compute[350387]: 2025-11-26 01:51:19.636 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:51:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005274903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.155 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.252 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.252 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.252 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.261 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.261 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.262 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:51:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.847 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.848 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3985MB free_disk=59.93931579589844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.849 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.849 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.947 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.947 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.948 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:51:20 compute-0 nova_compute[350387]: 2025-11-26 01:51:20.948 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.018 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.061 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121881.0609412, 0e500d52-72e1-4501-b4d6-fc6ca575760f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.062 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] VM Started (Lifecycle Event)#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.065 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.078 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.084 350391 INFO nova.virt.libvirt.driver [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Instance spawned successfully.#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.084 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.087 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.093 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.118 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.119 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.119 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.120 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.120 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.120 350391 DEBUG nova.virt.libvirt.driver [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.124 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.126 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.126 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121881.0671878, 0e500d52-72e1-4501-b4d6-fc6ca575760f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.126 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] VM Paused (Lifecycle Event)#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.187 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.196 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764121881.0754387, 0e500d52-72e1-4501-b4d6-fc6ca575760f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.196 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] VM Resumed (Lifecycle Event)#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.226 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.236 350391 INFO nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Took 9.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.237 350391 DEBUG nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.239 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.272 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.349 350391 INFO nova.compute.manager [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Took 10.39 seconds to build instance.#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.364 350391 DEBUG oslo_concurrency.lockutils [None req-bf42769c-37aa-4f16-a507-cab589560a5e b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:51:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753124963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.588 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.598 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.626 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.652 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.653 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.662 350391 DEBUG nova.compute.manager [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.662 350391 DEBUG oslo_concurrency.lockutils [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.663 350391 DEBUG oslo_concurrency.lockutils [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.663 350391 DEBUG oslo_concurrency.lockutils [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.664 350391 DEBUG nova.compute.manager [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] No waiting events found dispatching network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 01:51:21 compute-0 nova_compute[350387]: 2025-11-26 01:51:21.664 350391 WARNING nova.compute.manager [req-9f11f634-b57f-4b5c-9086-3b204dd75b12 req-50492b11-13af-446b-a735-4b6e3465b7ee 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received unexpected event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 for instance with vm_state active and task_state None.#033[00m
Nov 26 01:51:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Nov 26 01:51:23 compute-0 nova_compute[350387]: 2025-11-26 01:51:23.170 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:23.245 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:51:23 compute-0 nova_compute[350387]: 2025-11-26 01:51:23.885 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:51:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 234 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Nov 26 01:51:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:24.966 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:51:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:24.967 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:51:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:51:24.968 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:51:26 compute-0 nova_compute[350387]: 2025-11-26 01:51:26.121 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:26 compute-0 podman[415853]: 2025-11-26 01:51:26.561797551 +0000 UTC m=+0.107905367 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 26 01:51:26 compute-0 podman[415854]: 2025-11-26 01:51:26.584647359 +0000 UTC m=+0.135964112 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:51:26 compute-0 podman[415855]: 2025-11-26 01:51:26.611073267 +0000 UTC m=+0.150367770 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:51:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 918 KiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 26 01:51:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:51:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131468730' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:51:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:51:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131468730' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:51:28 compute-0 nova_compute[350387]: 2025-11-26 01:51:28.174 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:28 compute-0 podman[415914]: 2025-11-26 01:51:28.643007106 +0000 UTC m=+0.185901536 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:51:28 compute-0 podman[415915]: 2025-11-26 01:51:28.673721306 +0000 UTC m=+0.209959318 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:51:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 660 KiB/s wr, 82 op/s
Nov 26 01:51:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:29 compute-0 podman[158021]: time="2025-11-26T01:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:51:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:51:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8618 "" "Go-http-client/1.1"
Nov 26 01:51:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 68 op/s
Nov 26 01:51:31 compute-0 nova_compute[350387]: 2025-11-26 01:51:31.124 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:31 compute-0 openstack_network_exporter[367323]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:51:31 compute-0 openstack_network_exporter[367323]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:51:31 compute-0 openstack_network_exporter[367323]: ERROR   01:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:51:31 compute-0 openstack_network_exporter[367323]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:51:31 compute-0 openstack_network_exporter[367323]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:51:31 compute-0 podman[415958]: 2025-11-26 01:51:31.648365845 +0000 UTC m=+0.116552552 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Nov 26 01:51:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 KiB/s wr, 59 op/s
Nov 26 01:51:33 compute-0 nova_compute[350387]: 2025-11-26 01:51:33.178 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 50 op/s
Nov 26 01:51:36 compute-0 nova_compute[350387]: 2025-11-26 01:51:36.128 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 41 op/s
Nov 26 01:51:37 compute-0 podman[415978]: 2025-11-26 01:51:37.570981609 +0000 UTC m=+0.114234047 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 26 01:51:38 compute-0 nova_compute[350387]: 2025-11-26 01:51:38.183 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 624 KiB/s rd, 19 op/s
Nov 26 01:51:39 compute-0 podman[415997]: 2025-11-26 01:51:39.561637409 +0000 UTC m=+0.110418159 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:51:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:51:41
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data']
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
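The balancer round above applies two limits that are visible in its own output: it only optimizes while at most 5% of objects are misplaced ("max misplaced 0.050000"), and it prepares at most 10 upmap changes per round (the "/10" in "prepared 0/10 changes"). A minimal sketch of that gating, with both thresholds taken from the log rather than queried from the cluster:

    MAX_MISPLACED = 0.05     # "Mode upmap, max misplaced 0.050000"
    MAX_OPTIMIZATIONS = 10   # the "/10" in "prepared 0/10 changes"

    def plan_round(misplaced_ratio, candidate_changes):
        # Skip the round entirely while recovery still has too much in flight.
        if misplaced_ratio > MAX_MISPLACED:
            return []
        # Otherwise cap how many upmap changes are prepared at once.
        return candidate_changes[:MAX_OPTIMIZATIONS]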
Nov 26 01:51:41 compute-0 nova_compute[350387]: 2025-11-26 01:51:41.131 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:51:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:51:41 compute-0 podman[416015]: 2025-11-26 01:51:41.572496942 +0000 UTC m=+0.119827705 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
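The node_exporter container above publishes host port 9100 and is started with --web.config.file, which normally enables TLS on the listener. A hedged scrape of its metrics endpoint (the https scheme and the skipped certificate verification are assumptions for a quick lab check; in real use, trust the telemetry CA mounted at /etc/node_exporter/tls):

    import ssl, urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # lab-only shortcut; see lead-in

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx, timeout=5) as resp:
        print(resp.read().decode()[:400])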
Nov 26 01:51:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:43 compute-0 nova_compute[350387]: 2025-11-26 01:51:43.189 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:46 compute-0 nova_compute[350387]: 2025-11-26 01:51:46.135 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:48 compute-0 nova_compute[350387]: 2025-11-26 01:51:48.193 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:51:49 compute-0 ovn_controller[89102]: 2025-11-26T01:51:49Z|00039|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
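The memory_trim message above means ovn-controller returned heap to the OS after roughly 30 s of inactivity. Its current usage can be probed with the stock unixctl command shared by OVS/OVN daemons, assuming the default run directory so that -t resolves the daemon's control socket:

    import subprocess

    # memory/show is registered by the common OVS/OVN memory module.
    print(subprocess.run(
        ["ovs-appctl", "-t", "ovn-controller", "memory/show"],
        capture_output=True, text=True, check=True).stdout)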
Nov 26 01:51:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008201929494692974 of space, bias 1.0, pg target 0.24605788484078922 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:51:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
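Every pg_autoscaler line above follows one formula: pg target = capacity_ratio x bias x PG budget. The ratios and biases are printed per pool; the budget is not, but all the logged values are consistent with a budget of 300 (an assumption, e.g. mon_target_pg_per_osd=100 across the 3 OSDs backing the 60 GiB shown in the pgmap lines). The result is then quantized to a power of two, subject to per-pool minimums. A check against two of the logged values:

    # Inputs (capacity_ratio, bias, logged target) copied from the log above;
    # PG_BUDGET = 300 is the assumption described in the lead-in.
    PG_BUDGET = 300

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    assert abs(pg_target(0.0008201929494692974, 1.0)
               - 0.24605788484078922) < 1e-12       # pool 'vms'
    assert abs(pg_target(5.087256625643029e-07, 4.0)
               - 0.0006104707950771635) < 1e-12     # 'cephfs.cephfs.meta'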
Nov 26 01:51:51 compute-0 nova_compute[350387]: 2025-11-26 01:51:51.139 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 111 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Nov 26 01:51:53 compute-0 nova_compute[350387]: 2025-11-26 01:51:53.198 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:54 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 26 01:51:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 114 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 212 KiB/s wr, 4 op/s
Nov 26 01:51:55 compute-0 ovn_controller[89102]: 2025-11-26T01:51:55Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:20:57 192.168.0.118
Nov 26 01:51:55 compute-0 ovn_controller[89102]: 2025-11-26T01:51:55Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:20:57 192.168.0.118
Nov 26 01:51:56 compute-0 nova_compute[350387]: 2025-11-26 01:51:56.141 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:51:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 122 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 814 KiB/s wr, 18 op/s
Nov 26 01:51:57 compute-0 podman[416042]: 2025-11-26 01:51:57.580691793 +0000 UTC m=+0.120601687 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 01:51:57 compute-0 podman[416044]: 2025-11-26 01:51:57.586178989 +0000 UTC m=+0.118304232 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:51:57 compute-0 podman[416043]: 2025-11-26 01:51:57.614151811 +0000 UTC m=+0.150761121 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 26 01:51:58 compute-0 nova_compute[350387]: 2025-11-26 01:51:58.201 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:51:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 131 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 1.2 MiB/s wr, 37 op/s
Nov 26 01:51:59 compute-0 podman[416100]: 2025-11-26 01:51:59.555236837 +0000 UTC m=+0.103155153 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:51:59 compute-0 podman[416101]: 2025-11-26 01:51:59.633319719 +0000 UTC m=+0.169627926 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:51:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:51:59 compute-0 podman[158021]: time="2025-11-26T01:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:51:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:51:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8621 "" "Go-http-client/1.1"
Nov 26 01:52:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 139 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:52:01 compute-0 nova_compute[350387]: 2025-11-26 01:52:01.146 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:01 compute-0 openstack_network_exporter[367323]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:52:01 compute-0 openstack_network_exporter[367323]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:52:01 compute-0 openstack_network_exporter[367323]: ERROR   01:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:52:01 compute-0 openstack_network_exporter[367323]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:52:01 compute-0 openstack_network_exporter[367323]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
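These exporter errors are expected on a compute node: openstack_network_exporter probes for ovn-northd and ovsdb-server control sockets, but only ovn-controller runs here, and the container only sees the runtime dirs mounted into it. A quick host-side check of which control sockets actually exist (the paths are the stock OVS/OVN run dirs and are assumptions for this deployment):

    import glob

    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",          # absent: northd runs on controllers
                    "/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-controller.*.ctl"):     # present on this node
        print(pattern, "->", glob.glob(pattern) or "none")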
Nov 26 01:52:02 compute-0 podman[416142]: 2025-11-26 01:52:02.590433761 +0000 UTC m=+0.136123697 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 01:52:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 139 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:52:03 compute-0 nova_compute[350387]: 2025-11-26 01:52:03.206 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 139 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:52:06 compute-0 nova_compute[350387]: 2025-11-26 01:52:06.149 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 139 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 1.3 MiB/s wr, 53 op/s
Nov 26 01:52:08 compute-0 nova_compute[350387]: 2025-11-26 01:52:08.211 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:08 compute-0 podman[416160]: 2025-11-26 01:52:08.616457281 +0000 UTC m=+0.168448072 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:52:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 710 KiB/s wr, 39 op/s
Nov 26 01:52:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:10 compute-0 podman[416180]: 2025-11-26 01:52:10.603742546 +0000 UTC m=+0.151261705 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 01:52:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 315 KiB/s wr, 21 op/s
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:11 compute-0 nova_compute[350387]: 2025-11-26 01:52:11.153 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:12 compute-0 podman[416200]: 2025-11-26 01:52:12.557888092 +0000 UTC m=+0.108072712 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:52:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Nov 26 01:52:13 compute-0 nova_compute[350387]: 2025-11-26 01:52:13.216 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fbf87cb1-aca3-4627-b16b-01bca72e6303 does not exist
Nov 26 01:52:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 02741265-635a-4bd3-a9ac-a503adf79620 does not exist
Nov 26 01:52:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4475d065-fef1-458a-9c06-a17e4425234c does not exist
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:52:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:52:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
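The mon_command traffic above is cephadm's mgr module refreshing its state: it regenerates a minimal ceph.conf and fetches the client.admin and client.bootstrap-osd keyrings. The same minimal conf can be produced by hand with the command the mgr is dispatching; run it from any node with an admin keyring, e.g. inside cephadm shell:

    import subprocess

    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True).stdout
    print(conf)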
Nov 26 01:52:15 compute-0 nova_compute[350387]: 2025-11-26 01:52:15.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:16 compute-0 nova_compute[350387]: 2025-11-26 01:52:16.157 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.326617462 +0000 UTC m=+0.101424364 container create e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.285174838 +0000 UTC m=+0.059981800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:16 compute-0 systemd[1]: Started libpod-conmon-e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0.scope.
Nov 26 01:52:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.49003051 +0000 UTC m=+0.264837472 container init e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.506943409 +0000 UTC m=+0.281750311 container start e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.513409832 +0000 UTC m=+0.288216744 container attach e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:52:16 compute-0 ecstatic_cartwright[416509]: 167 167
Nov 26 01:52:16 compute-0 systemd[1]: libpod-e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0.scope: Deactivated successfully.
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.522454389 +0000 UTC m=+0.297261291 container died e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d35405683cc0fd50c51be21f837e90bf536af3adfa21de32897a5104c81aafe-merged.mount: Deactivated successfully.
Nov 26 01:52:16 compute-0 podman[416493]: 2025-11-26 01:52:16.605589193 +0000 UTC m=+0.380396075 container remove e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:52:16 compute-0 systemd[1]: libpod-conmon-e74a77810de9756d688cf9c1084086a548e941a32565d1d72fcfbe66188f6de0.scope: Deactivated successfully.
Nov 26 01:52:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Nov 26 01:52:16 compute-0 podman[416531]: 2025-11-26 01:52:16.900791094 +0000 UTC m=+0.079463592 container create ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:52:16 compute-0 podman[416531]: 2025-11-26 01:52:16.875908469 +0000 UTC m=+0.054581047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:16 compute-0 systemd[1]: Started libpod-conmon-ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c.scope.
Nov 26 01:52:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:17 compute-0 podman[416531]: 2025-11-26 01:52:17.069331498 +0000 UTC m=+0.248004026 container init ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:52:17 compute-0 podman[416531]: 2025-11-26 01:52:17.090868938 +0000 UTC m=+0.269541456 container start ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 01:52:17 compute-0 podman[416531]: 2025-11-26 01:52:17.098052541 +0000 UTC m=+0.276725019 container attach ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:52:17 compute-0 nova_compute[350387]: 2025-11-26 01:52:17.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:17 compute-0 nova_compute[350387]: 2025-11-26 01:52:17.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:17 compute-0 nova_compute[350387]: 2025-11-26 01:52:17.297 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:52:17 compute-0 nova_compute[350387]: 2025-11-26 01:52:17.297 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:52:18 compute-0 nova_compute[350387]: 2025-11-26 01:52:18.159 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:52:18 compute-0 nova_compute[350387]: 2025-11-26 01:52:18.160 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:52:18 compute-0 nova_compute[350387]: 2025-11-26 01:52:18.160 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 01:52:18 compute-0 nova_compute[350387]: 2025-11-26 01:52:18.160 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:52:18 compute-0 nova_compute[350387]: 2025-11-26 01:52:18.220 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:18 compute-0 hungry_chebyshev[416546]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:52:18 compute-0 hungry_chebyshev[416546]: --> relative data size: 1.0
Nov 26 01:52:18 compute-0 hungry_chebyshev[416546]: --> All data devices are unavailable
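The hungry_chebyshev output above is cephadm running a ceph-volume probe in a throwaway container: it saw 0 physical and 3 LVM data devices and concluded all are unavailable (already consumed by the existing OSDs). The per-device reasons can be listed with ceph-volume's inventory; a hedged sketch, assuming it is run inside cephadm shell on this host:

    import json, subprocess

    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for dev in inv:
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons"))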
Nov 26 01:52:18 compute-0 systemd[1]: libpod-ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c.scope: Deactivated successfully.
Nov 26 01:52:18 compute-0 systemd[1]: libpod-ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c.scope: Consumed 1.282s CPU time.
Nov 26 01:52:18 compute-0 podman[416531]: 2025-11-26 01:52:18.457493203 +0000 UTC m=+1.636165721 container died ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e9f960169a3b4e66091f19183ae2e7e020287c2b8f1a4db1b7860c604aec63-merged.mount: Deactivated successfully.
Nov 26 01:52:18 compute-0 podman[416531]: 2025-11-26 01:52:18.536391568 +0000 UTC m=+1.715064056 container remove ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chebyshev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:52:18 compute-0 systemd[1]: libpod-conmon-ac17247bb7d668f1022a3d8748443c5f46447dab51ca2c928ea0d62b7ecbf96c.scope: Deactivated successfully.
Nov 26 01:52:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.714 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.734 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.735 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
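The Acquired/Releasing pair around "refresh_cache-<uuid>" is oslo.concurrency's named internal lock, which Nova takes so only one worker heals a given instance's network info cache at a time. A minimal sketch of the same pattern, with the UUID taken from this log and the refresh body left as a placeholder:

    from oslo_concurrency import lockutils

    instance_uuid = "b1c088bc-7a6b-4580-93ff-685731747189"

    # Matches the "Acquired lock" / "Releasing lock" lines above:
    # the body runs under a process-local named semaphore.
    with lockutils.lock("refresh_cache-" + instance_uuid):
        pass  # _get_instance_nw_info(...) would run here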
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.736 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.736 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.737 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.738 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:19 compute-0 nova_compute[350387]: 2025-11-26 01:52:19.739 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
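These "Running periodic task" lines come from oslo.service iterating over ComputeManager methods registered with its periodic_task decorator. A small standalone sketch of that machinery; the spacing values are illustrative, not Nova's configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_immediately=True so a single run_periodic_tasks() call
        # fires the task in this demo instead of waiting one interval.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_rebooting_instances(self, context):
            print("polling rebooting instances")

        @periodic_task.periodic_task(spacing=300, run_immediately=True)
        def _poll_unconfirmed_resizes(self, context):
            print("polling unconfirmed resizes")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None, raise_on_error=False)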
Nov 26 01:52:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.769437889 +0000 UTC m=+0.089375162 container create 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.739698947 +0000 UTC m=+0.059636290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:19 compute-0 systemd[1]: Started libpod-conmon-433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349.scope.
Nov 26 01:52:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.898531425 +0000 UTC m=+0.218468738 container init 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.914774355 +0000 UTC m=+0.234711658 container start 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.921159386 +0000 UTC m=+0.241096689 container attach 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:52:19 compute-0 nervous_edison[416742]: 167 167
Nov 26 01:52:19 compute-0 systemd[1]: libpod-433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349.scope: Deactivated successfully.
Nov 26 01:52:19 compute-0 podman[416726]: 2025-11-26 01:52:19.927487565 +0000 UTC m=+0.247424878 container died 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:52:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7636e9ed7eade8a5eed41721c0f5a5872ff74e4f3cd1a7d82f4d67aa0a707825-merged.mount: Deactivated successfully.
Nov 26 01:52:20 compute-0 podman[416726]: 2025-11-26 01:52:20.002978933 +0000 UTC m=+0.322916236 container remove 433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:52:20 compute-0 systemd[1]: libpod-conmon-433c13568b6d73d18c52ae1f1029f9e83f33c1e01ba15295da5ca7a5fff9a349.scope: Deactivated successfully.
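Each create/init/start/attach/died/remove sequence like the one above is a single short-lived container launched by cephadm, equivalent to a `podman run --rm` against the pinned ceph image. The "167 167" the container printed is consistent with a uid/gid probe (167 is the ceph user and group in these images), but the exact command is not in this log, so the probe below is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: created, started, attached, then removed,
    # producing the same systemd scope churn seen in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # expected "167 167" on this image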
Nov 26 01:52:20 compute-0 podman[416765]: 2025-11-26 01:52:20.283018645 +0000 UTC m=+0.077614639 container create 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.306 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.306 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.307 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:20 compute-0 podman[416765]: 2025-11-26 01:52:20.255220898 +0000 UTC m=+0.049816942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.345 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.346 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.347 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.348 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.348 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:52:20 compute-0 systemd[1]: Started libpod-conmon-93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95.scope.
Nov 26 01:52:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bb2dc6dbde900b39a7fbc06176c602e1297d7ce7bc95ea7dafc9c970de7adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bb2dc6dbde900b39a7fbc06176c602e1297d7ce7bc95ea7dafc9c970de7adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bb2dc6dbde900b39a7fbc06176c602e1297d7ce7bc95ea7dafc9c970de7adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bb2dc6dbde900b39a7fbc06176c602e1297d7ce7bc95ea7dafc9c970de7adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:20 compute-0 podman[416765]: 2025-11-26 01:52:20.445215269 +0000 UTC m=+0.239811263 container init 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:52:20 compute-0 podman[416765]: 2025-11-26 01:52:20.467614403 +0000 UTC m=+0.262210367 container start 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:52:20 compute-0 podman[416765]: 2025-11-26 01:52:20.472153792 +0000 UTC m=+0.266749786 container attach 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:52:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:52:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:52:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545829754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.877 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
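The subprocess pair above (Running cmd / CMD returned) is Nova's resource audit sizing its RBD backend with `ceph df`. A sketch of the same probe using plain subprocess, with the command line copied from the log and only the cluster-wide totals read out:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    # Cluster totals; compare the mgr pgmap lines (60 GiB / 60 GiB avail).
    print(stats["total_bytes"], stats["total_avail_bytes"])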
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.995 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.996 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:20 compute-0 nova_compute[350387]: 2025-11-26 01:52:20.997 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.004 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.005 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.006 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.161 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:21 compute-0 wonderful_allen[416782]: {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    "0": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "devices": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "/dev/loop3"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            ],
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_name": "ceph_lv0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_size": "21470642176",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "name": "ceph_lv0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "tags": {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_name": "ceph",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.crush_device_class": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.encrypted": "0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_id": "0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.vdo": "0"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            },
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "vg_name": "ceph_vg0"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        }
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    ],
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    "1": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "devices": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "/dev/loop4"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            ],
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_name": "ceph_lv1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_size": "21470642176",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "name": "ceph_lv1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "tags": {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_name": "ceph",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.crush_device_class": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.encrypted": "0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_id": "1",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.vdo": "0"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            },
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "vg_name": "ceph_vg1"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        }
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    ],
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    "2": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "devices": [
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "/dev/loop5"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            ],
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_name": "ceph_lv2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_size": "21470642176",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "name": "ceph_lv2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "tags": {
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.cluster_name": "ceph",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.crush_device_class": "",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.encrypted": "0",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osd_id": "2",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:                "ceph.vdo": "0"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            },
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "type": "block",
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:            "vg_name": "ceph_vg2"
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:        }
Nov 26 01:52:21 compute-0 wonderful_allen[416782]:    ]
Nov 26 01:52:21 compute-0 wonderful_allen[416782]: }
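The JSON the wonderful_allen container just printed is ceph-volume's LVM listing, keyed by OSD id. A sketch that reduces it to an osd_id -> device summary, reading only fields present above; the `report` literal is one entry trimmed from that output:

    def summarize(report: dict) -> dict:
        out = {}
        for osd_id, entries in report.items():
            for entry in entries:
                if entry.get("type") == "block":
                    out[int(osd_id)] = {
                        "lv_path": entry["lv_path"],
                        "devices": entry["devices"],
                        "osd_fsid": entry["tags"]["ceph.osd_fsid"],
                    }
        return out

    report = {  # one entry trimmed from the output above
        "0": [{
            "type": "block",
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff"},
        }],
    }
    print(summarize(report))
    # {0: {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'],
    #      'osd_fsid': '835781ef-644a-4834-abb3-029e5bcba0ff'}}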
Nov 26 01:52:21 compute-0 systemd[1]: libpod-93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95.scope: Deactivated successfully.
Nov 26 01:52:21 compute-0 conmon[416782]: conmon 93ec52a3560b6bf2bfba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95.scope/container/memory.events
Nov 26 01:52:21 compute-0 podman[416814]: 2025-11-26 01:52:21.355281944 +0000 UTC m=+0.042316059 container died 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:52:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3bb2dc6dbde900b39a7fbc06176c602e1297d7ce7bc95ea7dafc9c970de7adb-merged.mount: Deactivated successfully.
Nov 26 01:52:21 compute-0 podman[416814]: 2025-11-26 01:52:21.444376687 +0000 UTC m=+0.131410792 container remove 93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:52:21 compute-0 systemd[1]: libpod-conmon-93ec52a3560b6bf2bfbada4e0e1e0d0884ef89c585126eef318e0a0f6af43b95.scope: Deactivated successfully.
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.558 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.559 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3723MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.560 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.560 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.735 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.736 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.736 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.736 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:52:21 compute-0 nova_compute[350387]: 2025-11-26 01:52:21.875 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:52:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:52:22 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541593924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.390 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.401 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.423 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
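The inventory dict logged above is what placement uses to derive usable capacity, as (total - reserved) * allocation_ratio per resource class. A worked check with the numbers copied from that line:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2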
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.452 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.452 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.453 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.454 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 01:52:22 compute-0 nova_compute[350387]: 2025-11-26 01:52:22.475 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.628901476 +0000 UTC m=+0.084798333 container create 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.588025128 +0000 UTC m=+0.043922045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:22 compute-0 systemd[1]: Started libpod-conmon-1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d.scope.
Nov 26 01:52:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.78371899 +0000 UTC m=+0.239615897 container init 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.802475832 +0000 UTC m=+0.258372689 container start 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.809049958 +0000 UTC m=+0.264946815 container attach 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:52:22 compute-0 adoring_payne[417002]: 167 167
Nov 26 01:52:22 compute-0 systemd[1]: libpod-1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d.scope: Deactivated successfully.
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.815572443 +0000 UTC m=+0.271469300 container died 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a14dac55b2278c84a4bc7e1ab4a99dcc3f3f8e0e5190c4bd3d447436bbc279b7-merged.mount: Deactivated successfully.
Nov 26 01:52:22 compute-0 podman[416986]: 2025-11-26 01:52:22.889556018 +0000 UTC m=+0.345452875 container remove 1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:52:22 compute-0 systemd[1]: libpod-conmon-1ff0c549e0c0ff32d3d969eda95496a2ddf5d5dd7906c4cc154ba260c2043e9d.scope: Deactivated successfully.
Nov 26 01:52:23 compute-0 podman[417025]: 2025-11-26 01:52:23.176153125 +0000 UTC m=+0.093055306 container create 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:52:23 compute-0 nova_compute[350387]: 2025-11-26 01:52:23.224 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:52:23 compute-0 podman[417025]: 2025-11-26 01:52:23.141511184 +0000 UTC m=+0.058413365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:52:23 compute-0 systemd[1]: Started libpod-conmon-45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70.scope.
Nov 26 01:52:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e52021029fa2415c336e2b5353b131d7cd0c4eb054f1af24554ad29bfe11c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e52021029fa2415c336e2b5353b131d7cd0c4eb054f1af24554ad29bfe11c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e52021029fa2415c336e2b5353b131d7cd0c4eb054f1af24554ad29bfe11c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e52021029fa2415c336e2b5353b131d7cd0c4eb054f1af24554ad29bfe11c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:52:23 compute-0 podman[417025]: 2025-11-26 01:52:23.353043175 +0000 UTC m=+0.269945396 container init 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 26 01:52:23 compute-0 podman[417025]: 2025-11-26 01:52:23.371145938 +0000 UTC m=+0.288048119 container start 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:52:23 compute-0 podman[417025]: 2025-11-26 01:52:23.379121804 +0000 UTC m=+0.296024025 container attach 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:52:24 compute-0 practical_ganguly[417041]: {
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_id": 0,
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "type": "bluestore"
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    },
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_id": 2,
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "type": "bluestore"
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    },
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_id": 1,
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:        "type": "bluestore"
Nov 26 01:52:24 compute-0 practical_ganguly[417041]:    }
Nov 26 01:52:24 compute-0 practical_ganguly[417041]: }
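The JSON above maps each OSD UUID on this host to its cluster fsid, backing LVM device, osd_id, and objectstore type; cephadm collects it by running the short-lived ceph-volume container whose start/attach/died/remove lifecycle brackets the block. A minimal sketch, assuming JSON of exactly this shape (one entry reproduced from the block above), that reduces it to an osd_id -> device map:

    import json

    # One entry copied from the listing above; the real payload has three.
    raw = '''
    {
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }
    '''
    osds = json.loads(raw)
    by_id = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    print(by_id)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}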
Nov 26 01:52:24 compute-0 systemd[1]: libpod-45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70.scope: Deactivated successfully.
Nov 26 01:52:24 compute-0 systemd[1]: libpod-45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70.scope: Consumed 1.289s CPU time.
Nov 26 01:52:24 compute-0 podman[417025]: 2025-11-26 01:52:24.671339873 +0000 UTC m=+1.588242044 container died 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:52:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84e52021029fa2415c336e2b5353b131d7cd0c4eb054f1af24554ad29bfe11c-merged.mount: Deactivated successfully.
Nov 26 01:52:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:52:24 compute-0 podman[417025]: 2025-11-26 01:52:24.786755852 +0000 UTC m=+1.703658023 container remove 45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:52:24 compute-0 systemd[1]: libpod-conmon-45313c83a8afde3434e78275b1a478d6d1dfdff59dd09dbedb1c010eafd6db70.scope: Deactivated successfully.
Nov 26 01:52:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:52:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:52:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:24 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9e21555d-b19b-4fd8-9f1c-115daac844d9 does not exist
Nov 26 01:52:24 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0d583cb6-bd18-4010-bb0a-68c654a8083c does not exist
Nov 26 01:52:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:52:24.968 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:52:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:52:24.968 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:52:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:52:24.969 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
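The three lockutils lines above are oslo.concurrency's standard DEBUG trace for one synchronized call: acquiring, acquired (with the wait time), released (with the hold time). A minimal sketch, assuming oslo.concurrency is installed, of the decorator pattern that emits that trace; the function body is illustrative:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # the trace is logged at DEBUG

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named in-process lock held; lockutils logs the
        # "Acquiring lock" / "acquired" / "released" lines around this body.
        pass

    check_child_processes()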
Nov 26 01:52:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:52:26 compute-0 nova_compute[350387]: 2025-11-26 01:52:26.165 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:52:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:52:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476727238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:52:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:52:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476727238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
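Both audit entries above show client.openstack dispatching structured mon commands as JSON payloads. A minimal sketch, assuming the python3-rados binding plus a readable ceph.conf and keyring for that client (the paths are assumptions), that issues the same two commands:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        # {"prefix":"df","format":"json"}, as dispatched above
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        print(ret, json.loads(out)["stats"]["total_bytes"])
        # {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b'')
        print(ret, out)
    finally:
        cluster.shutdown()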
Nov 26 01:52:27 compute-0 nova_compute[350387]: 2025-11-26 01:52:27.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:52:27 compute-0 nova_compute[350387]: 2025-11-26 01:52:27.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 01:52:28 compute-0 nova_compute[350387]: 2025-11-26 01:52:28.228 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:28 compute-0 podman[417138]: 2025-11-26 01:52:28.567130269 +0000 UTC m=+0.117758585 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 01:52:28 compute-0 podman[417139]: 2025-11-26 01:52:28.577918735 +0000 UTC m=+0.113436693 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 26 01:52:28 compute-0 podman[417140]: 2025-11-26 01:52:28.581762724 +0000 UTC m=+0.114387570 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:52:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:52:29 compute-0 podman[158021]: time="2025-11-26T01:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:52:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:52:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8620 "" "Go-http-client/1.1"
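The two GET lines above are libpod REST calls answered by the podman API service (podman[158021]), here driven by prometheus-podman-exporter. A stdlib-only sketch that replays the containers listing; the socket path is the conventional rootful default and is an assumption:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # 200 and a JSON body, as logged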
Nov 26 01:52:30 compute-0 podman[417194]: 2025-11-26 01:52:30.56668294 +0000 UTC m=+0.124797505 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 01:52:30 compute-0 podman[417195]: 2025-11-26 01:52:30.692971617 +0000 UTC m=+0.233765692 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:52:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Nov 26 01:52:31 compute-0 nova_compute[350387]: 2025-11-26 01:52:31.169 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:31 compute-0 openstack_network_exporter[367323]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:52:31 compute-0 openstack_network_exporter[367323]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:52:31 compute-0 openstack_network_exporter[367323]: ERROR   01:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:52:31 compute-0 openstack_network_exporter[367323]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:52:31 compute-0 openstack_network_exporter[367323]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
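The exporter errors above all reduce to one condition: no *.ctl control-socket files for ovn-northd or the OVS DB server in the run directories the exporter probes, which is expected on a compute node where ovn-northd does not run. A minimal stdlib check of what is actually present (the directories are conventional defaults and an assumption here):

    import glob

    for rundir in ('/var/run/openvswitch', '/var/run/ovn'):
        print(rundir, glob.glob(f'{rundir}/*.ctl'))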
Nov 26 01:52:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:52:33 compute-0 nova_compute[350387]: 2025-11-26 01:52:33.232 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:33 compute-0 podman[417237]: 2025-11-26 01:52:33.58517143 +0000 UTC m=+0.127843872 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, architecture=x86_64, container_name=kepler, name=ubi9, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:52:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:52:36 compute-0 nova_compute[350387]: 2025-11-26 01:52:36.172 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:52:38 compute-0 nova_compute[350387]: 2025-11-26 01:52:38.237 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:52:39 compute-0 podman[417257]: 2025-11-26 01:52:39.593999034 +0000 UTC m=+0.142492406 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 01:52:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:52:41
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:52:41 compute-0 nova_compute[350387]: 2025-11-26 01:52:41.177 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:52:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:52:41 compute-0 podman[417275]: 2025-11-26 01:52:41.568017573 +0000 UTC m=+0.116504691 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 01:52:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 170 B/s wr, 0 op/s
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.859 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.860 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.860 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.861 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.862 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.863 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.864 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.865 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.866 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.870 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.875 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0e500d52-72e1-4501-b4d6-fc6ca575760f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 01:52:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:42.877 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0e500d52-72e1-4501-b4d6-fc6ca575760f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 01:52:43 compute-0 nova_compute[350387]: 2025-11-26 01:52:43.242 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:43 compute-0 podman[417296]: 2025-11-26 01:52:43.555450392 +0000 UTC m=+0.100564580 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.085 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 26 Nov 2025 01:52:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-88f3adbf-95a2-4d21-9560-3edbc6c9e516 x-openstack-request-id: req-88f3adbf-95a2-4d21-9560-3edbc6c9e516 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.086 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0e500d52-72e1-4501-b4d6-fc6ca575760f", "name": "vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo", "status": "ACTIVE", "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "user_id": "b130e7a8bed3424f9f5ff63b35cd2b28", "metadata": {"metering.server_group": "366b90b6-2e85-40c4-9ca1-855cf9022409"}, "hostId": "2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1", "image": {"id": "48e08d00-37a3-4465-a949-ff0b8afe4def", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/48e08d00-37a3-4465-a949-ff0b8afe4def"}]}, "flavor": {"id": "030e95e2-5458-42ef-a5df-79a19c0b681d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/030e95e2-5458-42ef-a5df-79a19c0b681d"}]}, "created": "2025-11-26T01:51:09Z", "updated": "2025-11-26T01:51:21Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.118", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:70:20:57"}, {"version": 4, "addr": "192.168.122.183", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:70:20:57"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0e500d52-72e1-4501-b4d6-fc6ca575760f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0e500d52-72e1-4501-b4d6-fc6ca575760f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T01:51:21.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.086 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0e500d52-72e1-4501-b4d6-fc6ca575760f used request id req-88f3adbf-95a2-4d21-9560-3edbc6c9e516 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.088 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'name': 'vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
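The REQ/RESP pair above is ceilometer fetching instance metadata from the Nova API with a keystone token; the x-openstack-request-id req-88f3adbf-95a2-4d21-9560-3edbc6c9e516 ties the two records together. A minimal sketch, assuming python-novaclient and keystoneauth1; every credential and URL below is a placeholder:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = client.Client('2.1', session=sess)

    # Same call as the logged GET /v2.1/servers/<uuid>
    server = nova.servers.get('0e500d52-72e1-4501-b4d6-fc6ca575760f')
    print(server.name, server.metadata)  # metadata carries metering.server_group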
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.089 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.089 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.089 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.089 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.091 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.091 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T01:52:44.089775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.092 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.092 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.092 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.093 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T01:52:44.093207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.101 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.107 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 0e500d52-72e1-4501-b4d6-fc6ca575760f / tapcc7c212d-f2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.107 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.108 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.109 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.109 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.109 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T01:52:44.109649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.110 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.111 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.111 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.111 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.112 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.112 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.113 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T01:52:44.111896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.114 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.115 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.115 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.115 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.115 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.116 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T01:52:44.115446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.117 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.117 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.117 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.117 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.118 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T01:52:44.118100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.118 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.119 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes volume: 4488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.120 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.120 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.120 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.120 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T01:52:44.120660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.160 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 36540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.203 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/cpu volume: 37110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.205 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.205 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.206 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.206 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.206 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.207 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.208 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T01:52:44.207115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.209 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.210 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.210 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.210 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.210 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.211 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T01:52:44.210739) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.211 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.213 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.213 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.214 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from further polling of [<NovaLikeServer: vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo>] on source pollsters: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo>]
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.216 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T01:52:44.213571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T01:52:44.217328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.218 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.218 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.220 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.220 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.220 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.220 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.221 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T01:52:44.220936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.221 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 1920 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.222 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.223 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.224 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.225 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.226 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T01:52:44.224493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.226 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.227 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.227 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.227 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.228 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.228 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.229 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T01:52:44.228132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.234 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.234 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.235 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.235 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T01:52:44.234929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.236 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.238 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.239 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T01:52:44.239159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.275 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.276 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.276 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.308 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.309 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.309 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.310 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.311 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.311 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.312 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.312 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T01:52:44.312224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.396 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.397 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.397 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.480 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.480 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.480 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.481 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.482 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.482 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from further polling of [<NovaLikeServer: vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo>] on source pollsters: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo>]
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T01:52:44.481789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.482 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.483 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.483 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.483 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.483 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.484 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T01:52:44.483745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.484 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.484 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.484 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 2021453674 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.485 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 321911498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.485 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 237452008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.485 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.485 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.486 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.487 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.487 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.487 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.488 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.489 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.489 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.489 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.489 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.489 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.490 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.490 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.490 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.490 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.490 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.491 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 41713664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.492 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.492 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.492 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.492 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.493 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
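[editor's note] The two power.state samples above report volume 1 for each instance; in ceilometer this value is the raw libvirt domain state, and state 1 is VIR_DOMAIN_RUNNING. A minimal sketch of reading the same state directly, assuming the libvirt-python binding and a local qemu:///system socket (the UUID is copied from the log):

    import libvirt  # assumes the libvirt-python binding is available

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("0e500d52-72e1-4501-b4d6-fc6ca575760f")
    state, reason = dom.state()  # returns [state, reason]
    print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # expect: 1 True
    conn.close()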
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.494 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.495 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.495 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 6318184171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.495 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 31365598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.495 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.496 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.497 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.497 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.497 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.497 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.497 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.498 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.499 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.500 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
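[editor's note] Every pollster above runs the same five-step cycle: a discovery pass via [local_instances], a coordination check against the (empty) hashrings, a heartbeat update, one _stats_to_sample record per attached disk device, and a closing "Finished polling" line. The recurring volume 1073741824 is simply a 1 GiB device size (1073741824 / 2**30 = 1.0), while the smaller figures (485376 and 583680) appear to be bytes actually allocated on the backing store. A schematic reconstruction of that loop follows; discover_instances, get_samples and publish are hypothetical stand-ins for the ceilometer.polling.manager internals, not the real API:

    # Schematic per-pollster cycle matching the log pattern above.
    def run_pollster(manager, pollster):
        # 1. discovery ("Executing discovery process ... [local_instances]")
        resources = manager.discover_instances("local_instances")
        # 2. coordination check; no group configured, so hashrings stay [None]
        if pollster.coordination_group is not None:
            manager.join_hashring(pollster.coordination_group)
        # 3. heartbeat ("Pollster heartbeat update: <name>")
        manager.heartbeat(pollster.name)
        # 4. one sample per instance device ("<uuid>/<meter> volume: <n>")
        for sample in pollster.get_samples(resources):
            manager.publish(sample)
        # 5. closing INFO line
        manager.log.info("Finished polling pollster %s", pollster.name)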
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.500 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.501 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.501 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.501 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.501 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.502 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.503 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T01:52:44.486271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T01:52:44.488904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T01:52:44.491139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T01:52:44.493320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T01:52:44.494498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T01:52:44.496652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T01:52:44.498854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.506 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.507 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.507 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.507 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.507 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.508 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.508 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.508 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.508 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.508 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:52:44.509 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:52:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Nov 26 01:52:46 compute-0 nova_compute[350387]: 2025-11-26 01:52:46.179 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Nov 26 01:52:48 compute-0 nova_compute[350387]: 2025-11-26 01:52:48.247 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Nov 26 01:52:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:52:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
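[editor's note] The pg_autoscaler figures above fit a simple relation: raw pg target = capacity ratio * bias * (target PGs per OSD * OSD count), then quantized to a power of two subject to minimums and change thresholds. For pool 'vms', 0.00110425264130364 * 1.0 * 300 = 0.3313, matching the logged target; the factor 300 is consistent with three OSDs at the default mon_target_pg_per_osd of 100, which is an assumption here. A sketch of the arithmetic (quantize is a deliberately simplified rounding, not Ceph's exact rule):

    # Reproduce the raw pg targets logged above; assumes 3 OSDs x 100 target PGs.
    TARGET_PGS = 100 * 3

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * TARGET_PGS

    def quantize(raw, minimum=1):
        n = minimum  # simplified: next power of two >= raw, floored at `minimum`
        while n < raw:
            n *= 2
        return n

    print(raw_pg_target(0.00110425264130364, 1.0))    # ~0.3313  ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')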
Nov 26 01:52:51 compute-0 nova_compute[350387]: 2025-11-26 01:52:51.180 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Nov 26 01:52:53 compute-0 nova_compute[350387]: 2025-11-26 01:52:53.251 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Nov 26 01:52:56 compute-0 nova_compute[350387]: 2025-11-26 01:52:56.185 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 26 01:52:58 compute-0 nova_compute[350387]: 2025-11-26 01:52:58.254 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:52:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 26 01:52:59 compute-0 podman[417322]: 2025-11-26 01:52:59.550369837 +0000 UTC m=+0.105595192 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Nov 26 01:52:59 compute-0 podman[417323]: 2025-11-26 01:52:59.577538866 +0000 UTC m=+0.127514782 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:52:59 compute-0 podman[417324]: 2025-11-26 01:52:59.589063113 +0000 UTC m=+0.122841490 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:52:59 compute-0 podman[158021]: time="2025-11-26T01:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:52:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:52:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:52:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Nov 26 01:53:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Nov 26 01:53:01 compute-0 nova_compute[350387]: 2025-11-26 01:53:01.186 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:01 compute-0 openstack_network_exporter[367323]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:53:01 compute-0 openstack_network_exporter[367323]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:53:01 compute-0 openstack_network_exporter[367323]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:53:01 compute-0 openstack_network_exporter[367323]: ERROR   01:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:53:01 compute-0 openstack_network_exporter[367323]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
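[editor's note] These exporter errors are expected on a compute node: ovn-northd runs only on controller nodes, and without a userspace (netdev) datapath there are no PMD stats to show, so the appctl calls find no control sockets. A quick way to see which control sockets a host actually exposes, using the run directories mounted into the exporter container above (adjust the paths if your OVS/OVN run directories differ):

    import glob

    # Socket directories from the openstack_network_exporter volume mounts.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))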
Nov 26 01:53:01 compute-0 podman[417383]: 2025-11-26 01:53:01.602637922 +0000 UTC m=+0.149975629 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 01:53:01 compute-0 podman[417384]: 2025-11-26 01:53:01.687160346 +0000 UTC m=+0.223442120 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 26 01:53:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Nov 26 01:53:03 compute-0 nova_compute[350387]: 2025-11-26 01:53:03.258 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:04 compute-0 podman[417427]: 2025-11-26 01:53:04.588662143 +0000 UTC m=+0.129729885 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9)
Nov 26 01:53:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 26 01:53:06 compute-0 nova_compute[350387]: 2025-11-26 01:53:06.191 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 26 01:53:08 compute-0 nova_compute[350387]: 2025-11-26 01:53:08.262 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 26 01:53:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:10 compute-0 podman[417447]: 2025-11-26 01:53:10.582967728 +0000 UTC m=+0.135355585 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 01:53:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:53:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:11 compute-0 nova_compute[350387]: 2025-11-26 01:53:11.193 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:12 compute-0 podman[417467]: 2025-11-26 01:53:12.582118528 +0000 UTC m=+0.131105204 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 01:53:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:13 compute-0 nova_compute[350387]: 2025-11-26 01:53:13.266 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:14 compute-0 podman[417487]: 2025-11-26 01:53:14.591409827 +0000 UTC m=+0.142632211 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:53:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:16 compute-0 nova_compute[350387]: 2025-11-26 01:53:16.194 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:16 compute-0 nova_compute[350387]: 2025-11-26 01:53:16.315 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:18 compute-0 nova_compute[350387]: 2025-11-26 01:53:18.270 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:18 compute-0 nova_compute[350387]: 2025-11-26 01:53:18.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:19 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.297 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:53:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.799 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.800 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:53:19 compute-0 nova_compute[350387]: 2025-11-26 01:53:19.800 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 01:53:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:21 compute-0 nova_compute[350387]: 2025-11-26 01:53:21.198 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:21 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.537 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.573 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.574 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
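
The Acquiring/Acquired/Releasing triplet around the cache refresh above is oslo.concurrency's standard lock tracing: _heal_instance_info_cache serializes each per-instance refresh on a named lock before rewriting the cache. A rough sketch of the pattern, with the instance UUID taken from the log and refresh_network_info standing in as a hypothetical placeholder for the Neutron lookup:

from oslo_concurrency import lockutils

instance_uuid = "0e500d52-72e1-4501-b4d6-fc6ca575760f"

# lockutils.lock() emits the DEBUG "Acquiring"/"Acquired"/"Releasing"
# lines seen above, keyed by the lock name.
with lockutils.lock("refresh_cache-%s" % instance_uuid):
    network_info = refresh_network_info(instance_uuid)  # hypothetical helper
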
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.575 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.576 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.577 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.578 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.579 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.580 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.608 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.609 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.610 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.611 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:53:22 compute-0 nova_compute[350387]: 2025-11-26 01:53:22.611 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:53:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:53:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991500672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.129 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
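
To size the RBD-backed disk pool, the resource tracker shells out to the exact command logged above (0.517 s here, which is why each audit produces a matching ceph-mon audit entry). A rough standalone equivalent, assuming the same client id and conf path and that ceph's JSON output reports cluster totals under a "stats" key:

import json
from oslo_concurrency import processutils

# Same command the resource tracker logs above.
out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
stats = json.loads(out)

# Cluster-wide figures should match the pgmap lines above
# (~60 GiB total, ~250 MiB used).
total_gib = stats["stats"]["total_bytes"] / 2**30
avail_gib = stats["stats"]["total_avail_bytes"] / 2**30
print(f"{avail_gib:.1f} GiB free of {total_gib:.1f} GiB")
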
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.273 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.380 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.381 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.381 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.391 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.391 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.392 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.961 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.963 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3804MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.963 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:53:23 compute-0 nova_compute[350387]: 2025-11-26 01:53:23.964 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.063 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.064 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.064 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.065 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.133 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:53:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:53:24 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471065665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.665 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.678 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.705 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
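
Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio per resource class, which is why 8 physical VCPUs can back 32 allocatable ones here. Reproducing the arithmetic on the logged values:

# Inventory exactly as logged above (min_unit/max_unit/step_size omitted).
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement's schedulable-capacity formula.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
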
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.708 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:53:24 compute-0 nova_compute[350387]: 2025-11-26 01:53:24.709 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:53:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:53:24.969 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:53:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:53:24.970 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:53:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:53:24.971 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:53:26 compute-0 nova_compute[350387]: 2025-11-26 01:53:26.201 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:53:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 57956488-3c45-469e-a3e1-b4a8008c075d does not exist
Nov 26 01:53:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 81789c25-8698-49cc-a596-3d61998ed300 does not exist
Nov 26 01:53:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d731eb7d-87d4-4a84-b857-f71ec1fb06fd does not exist
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:53:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108154500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:53:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:53:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108154500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:53:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:53:27 compute-0 podman[417827]: 2025-11-26 01:53:27.509001505 +0000 UTC m=+0.100550339 container create 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:53:27 compute-0 podman[417827]: 2025-11-26 01:53:27.475180217 +0000 UTC m=+0.066729101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:27 compute-0 systemd[1]: Started libpod-conmon-5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297.scope.
Nov 26 01:53:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:27 compute-0 podman[417827]: 2025-11-26 01:53:27.653767695 +0000 UTC m=+0.245316519 container init 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:53:27 compute-0 podman[417827]: 2025-11-26 01:53:27.671071365 +0000 UTC m=+0.262620189 container start 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:53:27 compute-0 podman[417827]: 2025-11-26 01:53:27.677317102 +0000 UTC m=+0.268865936 container attach 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:53:27 compute-0 epic_bassi[417844]: 167 167
Nov 26 01:53:27 compute-0 systemd[1]: libpod-5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297.scope: Deactivated successfully.
Nov 26 01:53:27 compute-0 conmon[417844]: conmon 5b81f2bdc6da3a5b2efb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297.scope/container/memory.events
Nov 26 01:53:27 compute-0 nova_compute[350387]: 2025-11-26 01:53:27.706 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:53:27 compute-0 podman[417849]: 2025-11-26 01:53:27.758656237 +0000 UTC m=+0.047479759 container died 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 26 01:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f931fd209b4f785763fdd84367aa0bcbdfc493097c3a08b59bc75d0325b1084-merged.mount: Deactivated successfully.
Nov 26 01:53:27 compute-0 podman[417849]: 2025-11-26 01:53:27.832978891 +0000 UTC m=+0.121802373 container remove 5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bassi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:53:27 compute-0 systemd[1]: libpod-conmon-5b81f2bdc6da3a5b2efb5e9bcdf2ab3d1ffffce992682f8f33000964bc389297.scope: Deactivated successfully.
Nov 26 01:53:28 compute-0 podman[417869]: 2025-11-26 01:53:28.107966269 +0000 UTC m=+0.063744097 container create f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:53:28 compute-0 systemd[1]: Started libpod-conmon-f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde.scope.
Nov 26 01:53:28 compute-0 podman[417869]: 2025-11-26 01:53:28.089498268 +0000 UTC m=+0.045276126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:28 compute-0 podman[417869]: 2025-11-26 01:53:28.252047208 +0000 UTC m=+0.207825066 container init f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 26 01:53:28 compute-0 podman[417869]: 2025-11-26 01:53:28.265435435 +0000 UTC m=+0.221213253 container start f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:53:28 compute-0 podman[417869]: 2025-11-26 01:53:28.270466327 +0000 UTC m=+0.226244155 container attach f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:53:28 compute-0 nova_compute[350387]: 2025-11-26 01:53:28.281 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:29 compute-0 laughing_napier[417886]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:53:29 compute-0 laughing_napier[417886]: --> relative data size: 1.0
Nov 26 01:53:29 compute-0 laughing_napier[417886]: --> All data devices are unavailable
Nov 26 01:53:29 compute-0 systemd[1]: libpod-f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde.scope: Deactivated successfully.
Nov 26 01:53:29 compute-0 systemd[1]: libpod-f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde.scope: Consumed 1.161s CPU time.
Nov 26 01:53:29 compute-0 podman[417915]: 2025-11-26 01:53:29.57601627 +0000 UTC m=+0.058830018 container died f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fb9d807f2726492be46ace8dcb51d4438b094ede7732a19ac048eee350a37a8-merged.mount: Deactivated successfully.
Nov 26 01:53:29 compute-0 podman[417915]: 2025-11-26 01:53:29.664084972 +0000 UTC m=+0.146898650 container remove f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:53:29 compute-0 systemd[1]: libpod-conmon-f21cdaa1e263f6e38828186d691f3f79bf347574612ff49181f8b4146aa8ecde.scope: Deactivated successfully.
Nov 26 01:53:29 compute-0 podman[417931]: 2025-11-26 01:53:29.734256369 +0000 UTC m=+0.092023024 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 01:53:29 compute-0 podman[158021]: time="2025-11-26T01:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:53:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
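
The GET above is the libpod REST API answering on /run/podman/podman.sock (the same socket podman_exporter bind-mounts per its config_data below). The endpoint can be queried with nothing but the Python standard library by pointing an HTTP connection at the UNIX socket; the version prefix is copied from the logged request and may differ on other installs:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same request podman logs above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
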
Nov 26 01:53:29 compute-0 podman[417930]: 2025-11-26 01:53:29.755799076 +0000 UTC m=+0.110993498 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 26 01:53:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Nov 26 01:53:29 compute-0 podman[417932]: 2025-11-26 01:53:29.777342703 +0000 UTC m=+0.124084367 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:53:30 compute-0 podman[418125]: 2025-11-26 01:53:30.568955607 +0000 UTC m=+0.038465935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:30 compute-0 podman[418125]: 2025-11-26 01:53:30.886919986 +0000 UTC m=+0.356430314 container create 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:53:30 compute-0 systemd[1]: Started libpod-conmon-77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0.scope.
Nov 26 01:53:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:31 compute-0 podman[418125]: 2025-11-26 01:53:31.035031839 +0000 UTC m=+0.504542217 container init 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 01:53:31 compute-0 podman[418125]: 2025-11-26 01:53:31.051569115 +0000 UTC m=+0.521079453 container start 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:53:31 compute-0 vibrant_pike[418140]: 167 167
Nov 26 01:53:31 compute-0 podman[418125]: 2025-11-26 01:53:31.058339516 +0000 UTC m=+0.527849854 container attach 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:53:31 compute-0 systemd[1]: libpod-77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0.scope: Deactivated successfully.
Nov 26 01:53:31 compute-0 podman[418125]: 2025-11-26 01:53:31.070613652 +0000 UTC m=+0.540123990 container died 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecf34af45b0584e098c481d080eb1678e1d49576d39233d59c36a600f51e0732-merged.mount: Deactivated successfully.
Nov 26 01:53:31 compute-0 podman[418125]: 2025-11-26 01:53:31.142887948 +0000 UTC m=+0.612398256 container remove 77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:53:31 compute-0 systemd[1]: libpod-conmon-77dc98b005429bc54069e92d191b4173f7fc87da8ac9f56b7828ac0cb58c7af0.scope: Deactivated successfully.
Nov 26 01:53:31 compute-0 nova_compute[350387]: 2025-11-26 01:53:31.203 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:31 compute-0 openstack_network_exporter[367323]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:53:31 compute-0 openstack_network_exporter[367323]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:53:31 compute-0 openstack_network_exporter[367323]: ERROR   01:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:53:31 compute-0 openstack_network_exporter[367323]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:53:31 compute-0 openstack_network_exporter[367323]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:53:31 compute-0 podman[418163]: 2025-11-26 01:53:31.459602652 +0000 UTC m=+0.109820545 container create 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:53:31 compute-0 podman[418163]: 2025-11-26 01:53:31.421735645 +0000 UTC m=+0.071953618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:31 compute-0 systemd[1]: Started libpod-conmon-329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6.scope.
Nov 26 01:53:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5578e404fa63ad5c52ec7425d9cccb8f83e192ec9bef05b8966466b1e29fd8eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5578e404fa63ad5c52ec7425d9cccb8f83e192ec9bef05b8966466b1e29fd8eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5578e404fa63ad5c52ec7425d9cccb8f83e192ec9bef05b8966466b1e29fd8eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5578e404fa63ad5c52ec7425d9cccb8f83e192ec9bef05b8966466b1e29fd8eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:31 compute-0 podman[418163]: 2025-11-26 01:53:31.637083232 +0000 UTC m=+0.287301165 container init 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:53:31 compute-0 podman[418163]: 2025-11-26 01:53:31.656481849 +0000 UTC m=+0.306699772 container start 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:53:31 compute-0 podman[418163]: 2025-11-26 01:53:31.667990923 +0000 UTC m=+0.318208896 container attach 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]: {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    "0": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "devices": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "/dev/loop3"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            ],
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_name": "ceph_lv0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_size": "21470642176",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "name": "ceph_lv0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "tags": {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_name": "ceph",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.crush_device_class": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.encrypted": "0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_id": "0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.vdo": "0"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            },
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "vg_name": "ceph_vg0"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        }
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    ],
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    "1": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "devices": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "/dev/loop4"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            ],
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_name": "ceph_lv1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_size": "21470642176",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "name": "ceph_lv1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "tags": {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_name": "ceph",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.crush_device_class": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.encrypted": "0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_id": "1",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.vdo": "0"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            },
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "vg_name": "ceph_vg1"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        }
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    ],
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    "2": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "devices": [
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "/dev/loop5"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            ],
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_name": "ceph_lv2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_size": "21470642176",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "name": "ceph_lv2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "tags": {
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.cluster_name": "ceph",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.crush_device_class": "",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.encrypted": "0",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osd_id": "2",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:                "ceph.vdo": "0"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            },
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "type": "block",
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:            "vg_name": "ceph_vg2"
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:        }
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]:    ]
Nov 26 01:53:32 compute-0 practical_stonebraker[418178]: }
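
The JSON that practical_stonebraker just printed has the shape of ceph-volume lvm list --format json output: the top-level keys are OSD ids (0, 1, 2), each mapping to a single BlueStore block LV of 21470642176 bytes (about 20 GiB) carved from a loop device and tagged with cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A minimal sketch of summarizing it, assuming the block is saved to lvm_list.json (a hypothetical filename):

import json

# Summarize the per-OSD LV inventory logged above.
with open("lvm_list.json") as f:
    inventory = json.load(f)

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
              f"on {','.join(lv['devices'])} "
              f"osd_fsid={lv['tags']['ceph.osd_fsid']}")
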
Nov 26 01:53:32 compute-0 systemd[1]: libpod-329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6.scope: Deactivated successfully.
Nov 26 01:53:32 compute-0 podman[418163]: 2025-11-26 01:53:32.488706767 +0000 UTC m=+1.138924660 container died 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5578e404fa63ad5c52ec7425d9cccb8f83e192ec9bef05b8966466b1e29fd8eb-merged.mount: Deactivated successfully.
Nov 26 01:53:32 compute-0 podman[418163]: 2025-11-26 01:53:32.570590134 +0000 UTC m=+1.220808027 container remove 329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:53:32 compute-0 systemd[1]: libpod-conmon-329540e2777ee0f8000b8a65babdf9856fab5967e087c629426e3b2c2768dca6.scope: Deactivated successfully.
Nov 26 01:53:32 compute-0 podman[418187]: 2025-11-26 01:53:32.639614219 +0000 UTC m=+0.168104228 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 26 01:53:32 compute-0 podman[418188]: 2025-11-26 01:53:32.650137235 +0000 UTC m=+0.180111395 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:53:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:33 compute-0 nova_compute[350387]: 2025-11-26 01:53:33.284 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.705416387 +0000 UTC m=+0.072204446 container create e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:53:33 compute-0 systemd[1]: Started libpod-conmon-e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891.scope.
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.680118864 +0000 UTC m=+0.046907003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.819392758 +0000 UTC m=+0.186180847 container init e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.831957672 +0000 UTC m=+0.198745731 container start e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.836674735 +0000 UTC m=+0.203462794 container attach e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 01:53:33 compute-0 affectionate_chebyshev[418394]: 167 167
Nov 26 01:53:33 compute-0 systemd[1]: libpod-e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891.scope: Deactivated successfully.
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.843504407 +0000 UTC m=+0.210292496 container died e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 01:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a14796f144bd3a2ebc41b0e5822e4c11d2bc604846f8966e352a95701f213a-merged.mount: Deactivated successfully.
Nov 26 01:53:33 compute-0 podman[418378]: 2025-11-26 01:53:33.917306147 +0000 UTC m=+0.284094226 container remove e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:53:33 compute-0 systemd[1]: libpod-conmon-e16c4234a284773398933e61169ff1e462c017e9bf52df7b29454ca8124fd891.scope: Deactivated successfully.
Nov 26 01:53:34 compute-0 podman[418416]: 2025-11-26 01:53:34.215124348 +0000 UTC m=+0.099079873 container create b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:53:34 compute-0 podman[418416]: 2025-11-26 01:53:34.18254977 +0000 UTC m=+0.066505295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:53:34 compute-0 systemd[1]: Started libpod-conmon-b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f.scope.
Nov 26 01:53:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807ce65578edbbb9dfa78b2a8aba0895326fd713aac4a832114ea2e51fe80c7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807ce65578edbbb9dfa78b2a8aba0895326fd713aac4a832114ea2e51fe80c7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807ce65578edbbb9dfa78b2a8aba0895326fd713aac4a832114ea2e51fe80c7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807ce65578edbbb9dfa78b2a8aba0895326fd713aac4a832114ea2e51fe80c7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:53:34 compute-0 podman[418416]: 2025-11-26 01:53:34.396369265 +0000 UTC m=+0.280324800 container init b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:53:34 compute-0 podman[418416]: 2025-11-26 01:53:34.427509782 +0000 UTC m=+0.311465277 container start b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:53:34 compute-0 podman[418416]: 2025-11-26 01:53:34.432546694 +0000 UTC m=+0.316502229 container attach b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:53:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
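
The pgmap's "60 GiB / 60 GiB avail" is consistent with the three ~20 GiB OSD LVs listed earlier, and the same byte total reappears as the capacity figure in the pg_autoscaler lines further down. A quick check:

# 3 OSDs x 21470642176-byte LVs (lv_size from the records above)
total_bytes = 3 * 21470642176
print(total_bytes)                 # 64411926528, the autoscaler's capacity
print(round(total_bytes / 2**30))  # 60 (GiB), matching the pgmap
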
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]: {
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_id": 0,
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "type": "bluestore"
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    },
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_id": 2,
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "type": "bluestore"
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    },
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_id": 1,
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:        "type": "bluestore"
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]:    }
Nov 26 01:53:35 compute-0 kind_mcclintock[418431]: }
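
kind_mcclintock's JSON is the complementary activation view, keyed by OSD fsid rather than OSD id, giving the active /dev/mapper device and type bluestore for each of the three OSDs. A sketch of joining it back to integer OSD ids, assuming the block is saved to osd_list.json (a hypothetical filename):

import json

# Index the fsid-keyed listing above by integer OSD id.
with open("osd_list.json") as f:
    by_fsid = json.load(f)

by_osd = {rec["osd_id"]: rec["device"] for rec in by_fsid.values()}
for osd_id in sorted(by_osd):
    print(f"osd.{osd_id} -> {by_osd[osd_id]}")
# osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, etc.
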
Nov 26 01:53:35 compute-0 systemd[1]: libpod-b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f.scope: Deactivated successfully.
Nov 26 01:53:35 compute-0 podman[418416]: 2025-11-26 01:53:35.579475969 +0000 UTC m=+1.463431474 container died b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:53:35 compute-0 systemd[1]: libpod-b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f.scope: Consumed 1.140s CPU time.
Nov 26 01:53:35 compute-0 podman[418458]: 2025-11-26 01:53:35.609084473 +0000 UTC m=+0.148833114 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9)
Nov 26 01:53:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-807ce65578edbbb9dfa78b2a8aba0895326fd713aac4a832114ea2e51fe80c7e-merged.mount: Deactivated successfully.
Nov 26 01:53:35 compute-0 podman[418416]: 2025-11-26 01:53:35.671187803 +0000 UTC m=+1.555143308 container remove b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:53:35 compute-0 systemd[1]: libpod-conmon-b113f7b3a049e19d2d170bb2aa0c35e0c35d6013f172b97c23b6727899d92f7f.scope: Deactivated successfully.
Nov 26 01:53:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:53:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:53:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:53:35 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
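
The two config-key set commands are cephadm persisting the device inventory it just gathered into the monitor's key-value store, under per-host keys in the mgr/cephadm/ namespace. The cached value can be read back with ceph config-key get; a sketch, assuming an admin keyring is present on this host and that the stored payload is JSON (an assumption, not shown in the log):

import json
import subprocess

# Key name copied verbatim from the mon_command line above.
raw = subprocess.check_output(
    ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"]
)
inventory = json.loads(raw)  # assumes a JSON payload
print(type(inventory).__name__)
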
Nov 26 01:53:35 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev db98d1b7-761c-464d-8fae-9a38a6410c83 does not exist
Nov 26 01:53:35 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 89666b45-46af-458c-9cda-1416e2adb8af does not exist
Nov 26 01:53:36 compute-0 nova_compute[350387]: 2025-11-26 01:53:36.207 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:36 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:53:36 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:53:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:38 compute-0 nova_compute[350387]: 2025-11-26 01:53:38.288 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:53:41
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:53:41 compute-0 nova_compute[350387]: 2025-11-26 01:53:41.210 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:53:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:53:41 compute-0 podman[418546]: 2025-11-26 01:53:41.635042783 +0000 UTC m=+0.174595570 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:53:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:43 compute-0 nova_compute[350387]: 2025-11-26 01:53:43.294 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:43 compute-0 podman[418568]: 2025-11-26 01:53:43.580124876 +0000 UTC m=+0.127389790 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Nov 26 01:53:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:44 compute-0 podman[418588]: 2025-11-26 01:53:44.846463815 +0000 UTC m=+0.152699464 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:53:46 compute-0 nova_compute[350387]: 2025-11-26 01:53:46.214 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:48 compute-0 nova_compute[350387]: 2025-11-26 01:53:48.299 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:53:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:53:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
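The pg_autoscaler lines above are reproducible arithmetic: "pg target" is the pool's share of raw space times its bias times a cluster-wide PG budget, which these numbers imply is 300 (consistent with, say, mon_target_pg_per_osd=100 across 3 OSDs; that decomposition is an assumption), and the target is then quantized to a power of two with per-pool floors. A minimal sketch of that rule, not Ceph's actual code:

    def pg_target(usage_ratio, bias, pg_budget=300):
        # "pg target" as logged: share of cluster space * bias * PG budget (300 assumed)
        return usage_ratio * bias * pg_budget

    def quantize(target, floor=1):
        # Round up to the next power of two, never below the floor (simplified rule).
        n = floor
        while n < target:
            n *= 2
        return n

    print(pg_target(0.0011047613669662043, 1.0))  # ~0.3314284, as logged for 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, as logged for 'cephfs.cephfs.meta'
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))  # 1, as logged for '.mgr'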
Nov 26 01:53:51 compute-0 nova_compute[350387]: 2025-11-26 01:53:51.222 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:53:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:53 compute-0 nova_compute[350387]: 2025-11-26 01:53:53.303 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:53:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:53:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 5966 writes, 26K keys, 5966 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 5966 writes, 5966 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1355 writes, 6117 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    114.9      0.26              0.15        15    0.017       0      0       0.0       0.0
  L6      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    170.4    138.0      0.71              0.42        14    0.051     63K   7821       0.0       0.0
 Sum      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    124.6    131.8      0.98              0.57        29    0.034     63K   7821       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    125.0    126.3      0.30              0.19         8    0.038     20K   2552       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    170.4    138.0      0.71              0.42        14    0.051     63K   7821       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    118.0      0.25              0.15        14    0.018       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.029, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.0 seconds
Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 13.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000143 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(850,12.87 MB,4.17986%) FilterBlock(30,181.73 KB,0.0576217%) IndexBlock(30,342.27 KB,0.108521%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
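The throughput figures in that DB Stats dump are plain ingest-over-elapsed-time, so the rounding is easy to verify:

    # Values copied from the dump above; logged rates are rounded to two decimals.
    uptime_s, interval_s = 2400.0, 600.0
    print(0.04 * 1024 / uptime_s)  # cumulative ingest ~0.017 MB/s, logged as 0.02 MB/s
    print(8.79 / interval_s)       # interval ingest ~0.015 MB/s, logged as 0.01 MB/s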
Nov 26 01:53:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:56 compute-0 nova_compute[350387]: 2025-11-26 01:53:56.232 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:53:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:58 compute-0 nova_compute[350387]: 2025-11-26 01:53:58.311 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:53:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:53:59 compute-0 podman[158021]: time="2025-11-26T01:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:53:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:53:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:53:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
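Those GETs are the stats poller talking to podman's REST API over its local unix socket. A minimal client for the same endpoint (socket path as configured for podman_exporter elsewhere in this log; API version taken from the request line above):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the host name is a placeholder."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(len(json.loads(conn.getresponse().read())), "containers")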
Nov 26 01:54:00 compute-0 podman[418612]: 2025-11-26 01:54:00.526461767 +0000 UTC m=+0.084170883 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 26 01:54:00 compute-0 podman[418611]: 2025-11-26 01:54:00.549516856 +0000 UTC m=+0.101446869 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118)
Nov 26 01:54:00 compute-0 podman[418613]: 2025-11-26 01:54:00.554440105 +0000 UTC m=+0.089109972 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 01:54:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:01 compute-0 nova_compute[350387]: 2025-11-26 01:54:01.234 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:01 compute-0 openstack_network_exporter[367323]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:54:01 compute-0 openstack_network_exporter[367323]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:54:01 compute-0 openstack_network_exporter[367323]: ERROR   01:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:54:01 compute-0 openstack_network_exporter[367323]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:54:01 compute-0 openstack_network_exporter[367323]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
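These recurring errors are lookups for daemons that do not run on a compute node: there is no ovn-northd here, so no control socket exists for it. A sketch of the lookup being attempted, assuming the exporter resolves sockets the way ovs-appctl does, i.e. as <rundir>/<daemon>.<pid>.ctl (the rundir below is illustrative):

    import glob
    from typing import Optional

    def find_ctl_socket(daemon: str, rundir: str = "/run/ovn") -> Optional[str]:
        # unixctl sockets are named <daemon>.<pid>.ctl in the daemon's run directory
        matches = glob.glob("%s/%s.*.ctl" % (rundir, daemon))
        return matches[0] if matches else None

    print(find_ctl_socket("ovn-northd"))  # None on this host -> "no control socket files found"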
Nov 26 01:54:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:03 compute-0 nova_compute[350387]: 2025-11-26 01:54:03.316 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:03 compute-0 podman[418670]: 2025-11-26 01:54:03.577503269 +0000 UTC m=+0.115411002 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 26 01:54:03 compute-0 podman[418671]: 2025-11-26 01:54:03.62827107 +0000 UTC m=+0.159074503 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 26 01:54:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:06 compute-0 nova_compute[350387]: 2025-11-26 01:54:06.238 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:06 compute-0 podman[418715]: 2025-11-26 01:54:06.646697394 +0000 UTC m=+0.189286864 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, name=ubi9)
Nov 26 01:54:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:08 compute-0 nova_compute[350387]: 2025-11-26 01:54:08.322 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:11 compute-0 nova_compute[350387]: 2025-11-26 01:54:11.248 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:12 compute-0 podman[418735]: 2025-11-26 01:54:12.587487015 +0000 UTC m=+0.139774630 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:54:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:13 compute-0 nova_compute[350387]: 2025-11-26 01:54:13.328 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:14 compute-0 podman[418753]: 2025-11-26 01:54:14.59868199 +0000 UTC m=+0.137803573 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Nov 26 01:54:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:15 compute-0 podman[418771]: 2025-11-26 01:54:15.563027281 +0000 UTC m=+0.126102824 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
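node_exporter above runs with a systemd unit whitelist: only unit names matching the include regex are collected, and the flag's pattern is anchored to the whole name, so fullmatch is the apt comparison. A quick check against a few plausible unit names:

    import re

    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["ovsdb-server.service", "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, "collected" if include.fullmatch(unit) else "skipped")
    # -> sshd.service is skipped; the other three match.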
Nov 26 01:54:16 compute-0 nova_compute[350387]: 2025-11-26 01:54:16.249 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:17 compute-0 nova_compute[350387]: 2025-11-26 01:54:17.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:18 compute-0 nova_compute[350387]: 2025-11-26 01:54:18.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:18 compute-0 nova_compute[350387]: 2025-11-26 01:54:18.332 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:20 compute-0 nova_compute[350387]: 2025-11-26 01:54:20.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:20 compute-0 nova_compute[350387]: 2025-11-26 01:54:20.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:20 compute-0 nova_compute[350387]: 2025-11-26 01:54:20.297 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:54:20 compute-0 nova_compute[350387]: 2025-11-26 01:54:20.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 01:54:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:21 compute-0 nova_compute[350387]: 2025-11-26 01:54:21.255 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:21 compute-0 nova_compute[350387]: 2025-11-26 01:54:21.351 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:54:21 compute-0 nova_compute[350387]: 2025-11-26 01:54:21.352 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:54:21 compute-0 nova_compute[350387]: 2025-11-26 01:54:21.353 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 01:54:21 compute-0 nova_compute[350387]: 2025-11-26 01:54:21.354 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:54:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.336 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.406 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.430 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.431 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
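The info cache written above is one JSON document per VIF. Pulling the addresses back out of it (structure trimmed to just the fields used here; values as logged):

    vif = {
        "address": "fa:16:3e:0f:66:48",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.29",
            "type": "fixed",
            "floating_ips": [{"address": "192.168.122.186", "type": "floating"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip["floating_ips"]]
            print(ip["address"], "->", floats)  # 192.168.0.29 -> ['192.168.122.186']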
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.432 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.433 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.434 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.436 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.437 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.437 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.466 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.468 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.470 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.471 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:54:23 compute-0 nova_compute[350387]: 2025-11-26 01:54:23.472 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:54:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:54:24 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877984099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.055 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
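The probe the resource tracker just ran, standalone: shell out to ceph df with the openstack keyring and read back JSON (command line exactly as logged; the keys used below are the standard ceph df --format=json fields):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"], [pool["name"] for pool in df["pools"]])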
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.174 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.175 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.176 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.184 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.185 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.185 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:54:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.774 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.776 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3787MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.777 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.778 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:54:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.865 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.866 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.867 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.868 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:54:24 compute-0 nova_compute[350387]: 2025-11-26 01:54:24.958 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:54:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:54:24.971 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:54:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:54:24.972 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:54:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:54:24.973 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:54:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:54:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176468787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:54:25 compute-0 nova_compute[350387]: 2025-11-26 01:54:25.437 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:54:25 compute-0 nova_compute[350387]: 2025-11-26 01:54:25.449 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:54:25 compute-0 nova_compute[350387]: 2025-11-26 01:54:25.469 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 01:54:25 compute-0 nova_compute[350387]: 2025-11-26 01:54:25.472 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:54:25 compute-0 nova_compute[350387]: 2025-11-26 01:54:25.473 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
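Placement turns the inventory logged above into schedulable capacity as (total - reserved) * allocation_ratio per resource class; with these numbers:

    inventory = {  # copied from the inventory report above
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2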
Nov 26 01:54:26 compute-0 nova_compute[350387]: 2025-11-26 01:54:26.258 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:54:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298547041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:54:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:54:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298547041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:54:28 compute-0 nova_compute[350387]: 2025-11-26 01:54:28.341 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:29 compute-0 podman[158021]: time="2025-11-26T01:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:54:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:54:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8628 "" "Go-http-client/1.1"
Nov 26 01:54:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:31 compute-0 nova_compute[350387]: 2025-11-26 01:54:31.262 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:31 compute-0 openstack_network_exporter[367323]: ERROR   01:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:54:31 compute-0 openstack_network_exporter[367323]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:54:31 compute-0 openstack_network_exporter[367323]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:54:31 compute-0 openstack_network_exporter[367323]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:54:31 compute-0 openstack_network_exporter[367323]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:54:31 compute-0 podman[418839]: 2025-11-26 01:54:31.565703964 +0000 UTC m=+0.121217847 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible)
Nov 26 01:54:31 compute-0 podman[418841]: 2025-11-26 01:54:31.570685554 +0000 UTC m=+0.104273809 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 01:54:31 compute-0 podman[418840]: 2025-11-26 01:54:31.573978617 +0000 UTC m=+0.119999812 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 26 01:54:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:33 compute-0 nova_compute[350387]: 2025-11-26 01:54:33.346 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:54:34 compute-0 podman[418899]: 2025-11-26 01:54:34.584689653 +0000 UTC m=+0.130559339 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Nov 26 01:54:34 compute-0 podman[418900]: 2025-11-26 01:54:34.675229844 +0000 UTC m=+0.206328504 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 26 01:54:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:36 compute-0 nova_compute[350387]: 2025-11-26 01:54:36.264 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:54:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d9bdd255-1eaa-4ed1-9188-e9d40d0e7d62 does not exist
Nov 26 01:54:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev eb06697e-83a8-415a-b9c4-c1e2ac8c07bf does not exist
Nov 26 01:54:37 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e46806e6-1d67-49de-83c6-65a125bcc505 does not exist
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:54:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:54:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:54:37 compute-0 podman[419069]: 2025-11-26 01:54:37.614028434 +0000 UTC m=+0.157686524 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9)
Nov 26 01:54:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:54:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:38 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:54:38 compute-0 nova_compute[350387]: 2025-11-26 01:54:38.351 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:54:38 compute-0 podman[419226]: 2025-11-26 01:54:38.553603777 +0000 UTC m=+0.092848877 container create 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:54:38 compute-0 podman[419226]: 2025-11-26 01:54:38.530478565 +0000 UTC m=+0.069723685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:38 compute-0 systemd[1]: Started libpod-conmon-5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5.scope.
Nov 26 01:54:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:38 compute-0 podman[419226]: 2025-11-26 01:54:38.684558687 +0000 UTC m=+0.223803867 container init 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:54:38 compute-0 podman[419226]: 2025-11-26 01:54:38.694589889 +0000 UTC m=+0.233834979 container start 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:54:38 compute-0 podman[419226]: 2025-11-26 01:54:38.699158938 +0000 UTC m=+0.238404118 container attach 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:54:38 compute-0 reverent_shannon[419242]: 167 167
Nov 26 01:54:38 compute-0 systemd[1]: libpod-5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5.scope: Deactivated successfully.
Nov 26 01:54:38 compute-0 podman[419247]: 2025-11-26 01:54:38.788778293 +0000 UTC m=+0.057089490 container died 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db64de57017cc0c8d6b18eb4d6eff86e9a5aeda45d859f165fdfe3db6e6df6a-merged.mount: Deactivated successfully.
Nov 26 01:54:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:38 compute-0 podman[419247]: 2025-11-26 01:54:38.863335904 +0000 UTC m=+0.131647061 container remove 5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 01:54:38 compute-0 systemd[1]: libpod-conmon-5b389c69f9ce9000b63b7adfbe873b4ea3a037c14032944518f92f27446385a5.scope: Deactivated successfully.
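
The randomly named reverent_shannon container above appears to be one of cephadm's short-lived helper containers (note the quay.io/ceph/ceph image and the mgr.cephadm activity just before it): podman reports the full create → init → start → attach → died → remove sequence, with systemd wrapping conmon and the container in transient libpod-*.scope units. Any --rm container produces the same sequence; the image below is just an example:

    # A short-lived container reproduces the event sequence logged above.
    import subprocess

    subprocess.run(
        ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "true"],
        check=True)
    # Watch the lifecycle events as they happen:
    #   podman events --since 1m --filter event=create --filter event=remove
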
Nov 26 01:54:39 compute-0 podman[419269]: 2025-11-26 01:54:39.201753929 +0000 UTC m=+0.112717837 container create 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:54:39 compute-0 podman[419269]: 2025-11-26 01:54:39.150740032 +0000 UTC m=+0.061703990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:39 compute-0 systemd[1]: Started libpod-conmon-8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac.scope.
Nov 26 01:54:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:39 compute-0 podman[419269]: 2025-11-26 01:54:39.374450145 +0000 UTC m=+0.285414103 container init 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:54:39 compute-0 podman[419269]: 2025-11-26 01:54:39.40050977 +0000 UTC m=+0.311473688 container start 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:54:39 compute-0 podman[419269]: 2025-11-26 01:54:39.407076585 +0000 UTC m=+0.318040563 container attach 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:54:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:40 compute-0 amazing_mclean[419285]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:54:40 compute-0 amazing_mclean[419285]: --> relative data size: 1.0
Nov 26 01:54:40 compute-0 amazing_mclean[419285]: --> All data devices are unavailable
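
The amazing_mclean output is ceph-volume (run by cephadm inside the helper container) evaluating an OSD spec: three LVM data devices were passed, and all were judged unavailable, so no new OSDs get created. The same availability verdict can be read directly when diagnosing this, assuming ceph-volume is available where it is run:

    # Ask ceph-volume why each device is (un)available.
    import json
    import subprocess

    inventory = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in inventory:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))
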
Nov 26 01:54:40 compute-0 systemd[1]: libpod-8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac.scope: Deactivated successfully.
Nov 26 01:54:40 compute-0 podman[419269]: 2025-11-26 01:54:40.772609939 +0000 UTC m=+1.683573837 container died 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 01:54:40 compute-0 systemd[1]: libpod-8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac.scope: Consumed 1.280s CPU time.
Nov 26 01:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f411edababbc1b6874f38a06b984ac3d3416b3e616a92e77cddba6f8f3075bf-merged.mount: Deactivated successfully.
Nov 26 01:54:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:40 compute-0 podman[419269]: 2025-11-26 01:54:40.865540287 +0000 UTC m=+1.776504165 container remove 8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 01:54:40 compute-0 systemd[1]: libpod-conmon-8683b9471a0bf8e209f37dd370dc96b4fed7383fdae6d557fc03e96e3fc899ac.scope: Deactivated successfully.
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:54:41
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
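
The balancer pass above ran in upmap mode against the listed pools and prepared 0 of a possible 10 changes, i.e. PG placement is already within the 5% max-misplaced threshold. A quick sketch confirming the module state the mgr log reports:

    # Confirm the balancer mode/state via a mon command.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["active"], status["mode"])   # expect: True upmap
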
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:54:41 compute-0 nova_compute[350387]: 2025-11-26 01:54:41.266 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:54:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
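
Here the rbd_support handlers reload their trash-purge and mirror-snapshot schedules per pool (vms, volumes, backups, images); the empty start_after= is just the pagination cursor for the first batch. The stored schedules can be listed with the rbd CLI, e.g.:

    # List the schedules the rbd_support module is (re)loading.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"])
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"])
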
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.182033168 +0000 UTC m=+0.096057797 container create 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.146640391 +0000 UTC m=+0.060665070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:42 compute-0 systemd[1]: Started libpod-conmon-85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f.scope.
Nov 26 01:54:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.330605104 +0000 UTC m=+0.244629713 container init 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.348509039 +0000 UTC m=+0.262533678 container start 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.356559076 +0000 UTC m=+0.270583665 container attach 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:54:42 compute-0 suspicious_lewin[419482]: 167 167
Nov 26 01:54:42 compute-0 systemd[1]: libpod-85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f.scope: Deactivated successfully.
Nov 26 01:54:42 compute-0 conmon[419482]: conmon 85fb85fe6930d888cdbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f.scope/container/memory.events
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.365961111 +0000 UTC m=+0.279985730 container died 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 26 01:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-55352e353ffbb9f7de47457fc6c5578d1b7f6fa008fe40402635793734bb664f-merged.mount: Deactivated successfully.
Nov 26 01:54:42 compute-0 podman[419466]: 2025-11-26 01:54:42.445048149 +0000 UTC m=+0.359072748 container remove 85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:54:42 compute-0 systemd[1]: libpod-conmon-85fb85fe6930d888cdbccd4233d721acabc85c68cbbbf02bb32a3d042adc6b0f.scope: Deactivated successfully.
Nov 26 01:54:42 compute-0 podman[419505]: 2025-11-26 01:54:42.743357724 +0000 UTC m=+0.091466238 container create 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:54:42 compute-0 podman[419505]: 2025-11-26 01:54:42.71093752 +0000 UTC m=+0.059046074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:42 compute-0 systemd[1]: Started libpod-conmon-6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a.scope.
Nov 26 01:54:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.866 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.867 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.867 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.879 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.885 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'name': 'vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
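
The two "instance data" records above are what discover_libvirt_polling hands to every pollster this cycle. A sketch of the record shape, with build_instance_record as a hypothetical helper; the keys and sample values are taken from the logged dicts:

    def build_instance_record(domain_uuid, name, flavor, image_id, host):
        # Hypothetical helper; keys mirror the dict logged by
        # discover_libvirt_polling above.
        return {
            "id": domain_uuid,
            "name": name,
            "flavor": flavor,
            "image": {"id": image_id},
            "os_type": "hvm",
            "architecture": "x86_64",
            "OS-EXT-SRV-ATTR:host": host,
            "OS-EXT-STS:vm_state": "running",
            "status": "active",
            "metadata": {},
        }

    rec = build_instance_record(
        "b1c088bc-7a6b-4580-93ff-685731747189",
        "test_0",
        {"id": "030e95e2-5458-42ef-a5df-79a19c0b681d", "name": "m1.small",
         "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1, "swap": 0},
        "48e08d00-37a3-4465-a949-ff0b8afe4def",
        "compute-0.ctlplane.example.com",
    )
    print(rec["name"], rec["flavor"]["name"], rec["status"])
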
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.885 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.885 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.886 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.886 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.887 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
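
Each poll is preceded by the same two checks: does the pollster's source name a coordination group, and does a hashring exist for it? With a group name of None and no hashrings, the agent polls locally. A sketch of that decision, reducing hashring membership to a plain dict lookup:

    def needs_coordination(group_name, hashrings):
        # Mirrors the two DEBUG lines before each poll: no group name (or
        # no matching hashring) means the pollster is uncoordinated and
        # every agent polls it locally.
        ring = hashrings.get(group_name)
        if group_name is None or ring is None:
            print(f"Not configured for coordination; current hashrings: [{ring}]")
            return False
        return True

    hashrings = {}  # empty, hence "[None]" in the logged message
    assert needs_coordination(None, hashrings) is False
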
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.887 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.887 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.887 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.888 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.888 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T01:54:42.886081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T01:54:42.888884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
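
The heartbeat lines come in pairs: one id (15 here) records "Pollster heartbeat update" as the pollster runs, and another (12) later logs "Updated heartbeat" with the recorded timestamp, which suggests a producer/consumer hand-off between workers. A sketch of that pattern, with threads standing in for the two workers:

    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()

    def heartbeat(name):
        # Polling-worker side (15 in the log): stamp the meter name.
        heartbeats.put((name, datetime.datetime.now(datetime.timezone.utc)))

    def update_status():
        # Status-worker side (12 in the log): drain and record timestamps.
        while True:
            try:
                name, ts = heartbeats.get(timeout=0.2)
            except queue.Empty:
                return
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    heartbeat("disk.ephemeral.size")
    heartbeat("network.incoming.packets")
    t = threading.Thread(target=update_status)
    t.start()
    t.join()
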
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.896 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d8b8245e5733bfef2d7111b96956a78ae8a6ff8f25f2527858012174598bf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.904 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d8b8245e5733bfef2d7111b96956a78ae8a6ff8f25f2527858012174598bf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d8b8245e5733bfef2d7111b96956a78ae8a6ff8f25f2527858012174598bf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d8b8245e5733bfef2d7111b96956a78ae8a6ff8f25f2527858012174598bf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
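
Each "<instance-id>/<meter> volume: N" line is one sample: the cumulative counter read for that instance's interface, converted by _stats_to_sample. A sketch of the conversion, with the libvirt interface-stats call replaced by a hypothetical table holding the two rx packet counts from the log:

    from dataclasses import dataclass

    @dataclass
    class Sample:
        resource_id: str
        name: str
        volume: int
        unit: str = "packet"

    # Hypothetical stand-in for the libvirt interface-stats call; the
    # values are the cumulative rx packet counts seen in the log.
    RX_PACKETS = {
        "b1c088bc-7a6b-4580-93ff-685731747189": 18,
        "0e500d52-72e1-4501-b4d6-fc6ca575760f": 32,
    }

    def stats_to_sample(instance_id):
        s = Sample(instance_id, "network.incoming.packets", RX_PACKETS[instance_id])
        print(f"{s.resource_id}/{s.name} volume: {s.volume}")
        return s

    for iid in RX_PACKETS:
        stats_to_sample(iid)
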
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.910 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.911 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T01:54:42.910666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.911 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.911 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T01:54:42.912195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.913 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.914 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T01:54:42.913758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.914 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.914 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.915 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes volume: 4670 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T01:54:42.915346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.916 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T01:54:42.916901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:42 compute-0 podman[419505]: 2025-11-26 01:54:42.940751435 +0000 UTC m=+0.288859949 container init 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 01:54:42 compute-0 podman[419505]: 2025-11-26 01:54:42.953604178 +0000 UTC m=+0.301712702 container start 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 01:54:42 compute-0 podman[419505]: 2025-11-26 01:54:42.96007302 +0000 UTC m=+0.308181554 container attach 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:54:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:42.972 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 38520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:42 compute-0 podman[419518]: 2025-11-26 01:54:42.984077766 +0000 UTC m=+0.168539580 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.001 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/cpu volume: 154780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.002 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
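
The cpu volumes are cumulative guest CPU time in nanoseconds, so the two samples above correspond to roughly 38.5 s and 154.8 s of CPU time consumed since the instances booted. A one-function sketch of the unit relationship:

    NS_PER_S = 1_000_000_000

    def cpu_seconds(cpu_time_ns):
        # The logged volume is cumulative guest CPU time in nanoseconds.
        return cpu_time_ns / NS_PER_S

    print(cpu_seconds(38_520_000_000))   # test_0: 38.52 s
    print(cpu_seconds(154_780_000_000))  # instance-00000002: 154.78 s
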
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.002 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.002 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.003 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.003 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.003 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.004 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.004 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.004 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.005 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.005 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.006 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.006 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.006 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.006 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/memory.usage volume: 49.09375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.007 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
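
memory.usage is reported in MB, and the binary fraction 48.98046875 is exactly 50156 KiB divided by 1024, consistent with a KiB-based guest memory statistic. A sketch under that assumption; the available/unused figures below are hypothetical values for a 512 MiB guest, chosen so the result reproduces the first logged volume:

    def memory_usage_mb(available_kib, unused_kib):
        # usage = available - unused, reported in MB (KiB / 1024); the
        # division by 1024 is why the logged volumes carry binary fractions.
        return (available_kib - unused_kib) / 1024.0

    print(memory_usage_mb(available_kib=524288, unused_kib=474132))  # 48.98046875
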
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.007 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
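
Rate pollsters such as network.outgoing.bytes.rate are skipped outright when their discovery step returns nothing for the cycle, rather than being submitted with an empty resource list. A sketch of that short-circuit:

    def run_pollster(name, discover):
        resources = discover()
        if not resources:
            # Mirrors the logged skip: nothing discovered, nothing to poll.
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        print(f"Polling pollster {name} in the context of pollsters")
        return resources

    run_pollster("network.outgoing.bytes.rate", lambda: [])
    run_pollster("network.incoming.bytes",
                 lambda: ["b1c088bc-7a6b-4580-93ff-685731747189"])
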
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.008 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.008 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.008 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T01:54:43.003299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T01:54:43.006202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T01:54:43.009130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.009 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.010 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.010 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.010 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.010 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.011 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.011 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.011 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.011 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.012 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
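
The *.delta meters report the difference between consecutive cumulative readings, so instance 0e500d52's delta of 42 is its cumulative 4891 bytes (logged just above) minus the previous cycle's reading. A sketch of the diff-against-cache step; the previous value of 4849 is inferred from the arithmetic, not logged:

    _previous = {}

    def delta_sample(resource_id, meter, current):
        # Delta = current cumulative counter minus the cached previous
        # value; the first observation diffs against itself and yields 0.
        key = (resource_id, meter)
        prev = _previous.get(key, current)
        _previous[key] = current
        return current - prev

    rid = "0e500d52-72e1-4501-b4d6-fc6ca575760f"
    delta_sample(rid, "network.incoming.bytes.delta", 4849)         # first cycle: 0
    print(delta_sample(rid, "network.incoming.bytes.delta", 4891))  # 4891 - 4849 = 42
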
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.013 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.013 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.013 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.013 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T01:54:43.011198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.014 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.014 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T01:54:43.013945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.016 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.016 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.016 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.016 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.017 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.018 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.018 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T01:54:43.015948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T01:54:43.017789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T01:54:43.019156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.045 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.046 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.046 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.069 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.069 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.069 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.070 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
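
disk.device.* pollsters emit one sample per attached block device, which is why each instance logs three disk.device.capacity volumes: two 1 GiB disks, matching the flavor's 1 GB root and ephemeral sizes, plus one small device. A sketch of the fan-out; the device names are assumed for illustration, the capacities are the logged volumes:

    def per_device_samples(instance_id, devices):
        # One sample per (instance, device) pair, matching the three
        # disk.device.capacity lines logged for each instance.
        return [(instance_id, dev, cap) for dev, cap in devices.items()]

    # Device names vda/vdb/vdc are assumed; only the volumes are logged.
    devices = {"vda": 1073741824, "vdb": 1073741824, "vdc": 485376}
    for iid, dev, cap in per_device_samples(
            "b1c088bc-7a6b-4580-93ff-685731747189", devices):
        print(f"{iid}/disk.device.capacity volume: {cap}  # {dev}")
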
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.070 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.070 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.070 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.072 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.072 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T01:54:43.072126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.147 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.147 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.148 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.214 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.215 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.215 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.217 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.217 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.218 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T01:54:43.217113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.218 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 2021453674 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.218 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 321911498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.218 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 237452008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.219 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.220 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.220 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T01:54:43.219878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.220 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.220 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.221 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.221 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.222 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.222 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.222 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.223 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.223 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.223 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.224 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T01:54:43.223183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.224 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.224 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.224 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.225 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.225 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.225 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.225 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.225 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.226 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.226 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.227 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.227 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T01:54:43.226021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.227 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.228 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.228 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.228 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.229 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.229 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.229 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.230 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.231 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T01:54:43.229108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.231 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.231 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.232 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 8294131606 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.232 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 31365598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.232 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.233 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.233 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T01:54:43.230915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.234 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.235 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.235 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.235 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.236 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.236 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T01:54:43.234145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.237 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.238 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.238 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.238 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.239 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T01:54:43.237474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:54:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:54:43.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
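[editor's note] The block above is one complete ceilometer polling iteration: for each meter the agent runs discovery, checks whether the pollster needs hash-ring coordination (none do here), records a heartbeat, then emits one sample per instance disk device before logging "Finished polling pollster". Below is a minimal Python sketch of that control flow; all names (run_pollster, discover, get_stats) are illustrative only, the real logic lives in ceilometer/polling/manager.py as quoted in the log paths.

from datetime import datetime, timezone

def run_pollster(name, discover, get_stats):
    # "Executing discovery process for pollsters [...]"
    resources = discover()
    if not resources:
        # "Skip pollster ..., no new resources found this cycle"
        print(f"Skip pollster {name}, no new resources found this cycle")
        return []
    print(f"Polling pollster {name} in the context of pollsters")
    # "Updated heartbeat for <meter> (<timestamp>)"
    heartbeat = datetime.now(timezone.utc).isoformat()
    samples = []
    for res in resources:
        # one "<uuid>/<meter> volume: <n>" line per device stat
        for volume in get_stats(res):
            print(f"{res}/{name} volume: {volume}")
            samples.append((res, name, volume, heartbeat))
    print(f"Finished polling pollster {name} in the context of pollsters")
    return samples

# The two instance UUIDs polled in this cycle, with power.state == 1 for both:
instances = ["b1c088bc-7a6b-4580-93ff-685731747189",
             "0e500d52-72e1-4501-b4d6-fc6ca575760f"]
run_pollster("power.state",
             discover=lambda: list(instances),
             get_stats=lambda inst: [1])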
Nov 26 01:54:43 compute-0 nova_compute[350387]: 2025-11-26 01:54:43.355 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:54:43 compute-0 festive_hoover[419530]: {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    "0": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "devices": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "/dev/loop3"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            ],
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_name": "ceph_lv0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_size": "21470642176",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "name": "ceph_lv0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "tags": {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_name": "ceph",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.crush_device_class": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.encrypted": "0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_id": "0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.vdo": "0"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            },
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "vg_name": "ceph_vg0"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        }
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    ],
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    "1": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "devices": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "/dev/loop4"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            ],
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_name": "ceph_lv1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_size": "21470642176",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "name": "ceph_lv1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "tags": {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_name": "ceph",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.crush_device_class": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.encrypted": "0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_id": "1",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.vdo": "0"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            },
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "vg_name": "ceph_vg1"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        }
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    ],
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    "2": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "devices": [
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "/dev/loop5"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            ],
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_name": "ceph_lv2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_size": "21470642176",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "name": "ceph_lv2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "tags": {
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.cluster_name": "ceph",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.crush_device_class": "",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.encrypted": "0",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osd_id": "2",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:                "ceph.vdo": "0"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            },
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "type": "block",
Nov 26 01:54:43 compute-0 festive_hoover[419530]:            "vg_name": "ceph_vg2"
Nov 26 01:54:43 compute-0 festive_hoover[419530]:        }
Nov 26 01:54:43 compute-0 festive_hoover[419530]:    ]
Nov 26 01:54:43 compute-0 festive_hoover[419530]: }
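[editor's note] The JSON printed by the festive_hoover container has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each holding a list of logical-volume records. A small sketch for reducing it to the facts that matter per OSD (the function name summarize_osds is illustrative):

import json

def summarize_osds(raw: str) -> dict:
    """Map each OSD id to its backing device, LV path and size in GiB."""
    data = json.loads(raw)
    summary = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            summary[osd_id] = {
                "devices": lv["devices"],
                "lv_path": lv["lv_path"],
                # lv_size is a string of bytes; 21470642176 -> ~20.0 GiB
                "size_gib": round(int(lv["lv_size"]) / 2**30, 2),
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
    return summary

# For the output above this yields, e.g.:
# {"0": {"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
#        "size_gib": 20.0, "osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff"}, ...}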
Nov 26 01:54:43 compute-0 systemd[1]: libpod-6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a.scope: Deactivated successfully.
Nov 26 01:54:43 compute-0 podman[419547]: 2025-11-26 01:54:43.875633106 +0000 UTC m=+0.044675459 container died 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-17d8b8245e5733bfef2d7111b96956a78ae8a6ff8f25f2527858012174598bf5-merged.mount: Deactivated successfully.
Nov 26 01:54:44 compute-0 podman[419547]: 2025-11-26 01:54:44.003018316 +0000 UTC m=+0.172060629 container remove 6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:54:44 compute-0 systemd[1]: libpod-conmon-6ac4729f273ce05819144417dd611b5f85daeb9bd67665135827303839ce7e4a.scope: Deactivated successfully.
Nov 26 01:54:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:44 compute-0 podman[419662]: 2025-11-26 01:54:44.814073896 +0000 UTC m=+0.100857413 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 26 01:54:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
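[editor's note] The ceph-mgr pgmap line packs PG state and cluster capacity into one string. A throwaway parser for lines of exactly that shape (the regex is tuned to the format seen here, nothing more):

import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

line = ("pgmap v1319: 321 pgs: 321 active+clean; "
        "139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail")
print(PGMAP.search(line).groupdict())
# {'ver': '1319', 'pgs': '321', 'data': '139 MiB', 'used': '250 MiB',
#  'avail': '60 GiB', 'total': '60 GiB'}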
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.145538965 +0000 UTC m=+0.077460913 container create f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.114681986 +0000 UTC m=+0.046603954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:45 compute-0 systemd[1]: Started libpod-conmon-f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94.scope.
Nov 26 01:54:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.280428486 +0000 UTC m=+0.212350464 container init f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.295542702 +0000 UTC m=+0.227464650 container start f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.300672206 +0000 UTC m=+0.232594214 container attach f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:54:45 compute-0 nifty_pascal[419738]: 167 167
Nov 26 01:54:45 compute-0 systemd[1]: libpod-f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94.scope: Deactivated successfully.
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.304656978 +0000 UTC m=+0.236578956 container died f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9bc9897e895269c8b2823de0ec3f541610b0261e6206a042f328d5813011d58-merged.mount: Deactivated successfully.
Nov 26 01:54:45 compute-0 podman[419722]: 2025-11-26 01:54:45.376277996 +0000 UTC m=+0.308199974 container remove f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 01:54:45 compute-0 systemd[1]: libpod-conmon-f6c9e5ae5144cb26e0f75672abb50d50d5c307d6aa9c9efe6dc2aa857371ee94.scope: Deactivated successfully.
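[editor's note] The create → init → start → attach → died → remove sequence for nifty_pascal (and festive_hoover before it) is the journal footprint of a one-shot `podman run --rm` against the ceph image, the pattern cephadm uses to probe the host. The "167 167" it printed is consistent with reading the ceph uid/gid inside the image. A sketch of the same pattern, assuming podman is on PATH; the stat command is an assumption, not taken from this log:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def run_probe(*cmd: str) -> str:
    """One-shot container: podman logs create/init/start/attach, then
    died/remove as soon as the command exits (--rm cleans it up)."""
    result = subprocess.run(["podman", "run", "--rm", IMAGE, *cmd],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()

# Hypothetical probe matching the "167 167" output seen above:
# run_probe("stat", "-c", "%u %g", "/var/lib/ceph")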
Nov 26 01:54:45 compute-0 podman[419762]: 2025-11-26 01:54:45.673202222 +0000 UTC m=+0.078971666 container create ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:54:45 compute-0 podman[419762]: 2025-11-26 01:54:45.642537408 +0000 UTC m=+0.048306862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:54:45 compute-0 systemd[1]: Started libpod-conmon-ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730.scope.
Nov 26 01:54:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b36e538f952213d6106c12e263cbaa87b8e8180126310bd64cb3f72b38a1b29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b36e538f952213d6106c12e263cbaa87b8e8180126310bd64cb3f72b38a1b29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b36e538f952213d6106c12e263cbaa87b8e8180126310bd64cb3f72b38a1b29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b36e538f952213d6106c12e263cbaa87b8e8180126310bd64cb3f72b38a1b29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:54:45 compute-0 podman[419762]: 2025-11-26 01:54:45.815530212 +0000 UTC m=+0.221299666 container init ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 01:54:45 compute-0 podman[419776]: 2025-11-26 01:54:45.832860781 +0000 UTC m=+0.107144010 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:54:45 compute-0 podman[419762]: 2025-11-26 01:54:45.846956788 +0000 UTC m=+0.252726182 container start ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:54:45 compute-0 podman[419762]: 2025-11-26 01:54:45.852868454 +0000 UTC m=+0.258637868 container attach ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:54:46 compute-0 nova_compute[350387]: 2025-11-26 01:54:46.271 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]: {
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_id": 0,
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "type": "bluestore"
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    },
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_id": 2,
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "type": "bluestore"
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    },
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_id": 1,
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:        "type": "bluestore"
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]:    }
Nov 26 01:54:47 compute-0 sharp_khayyam[419787]: }
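[editor's note] The JSON block printed by the one-shot sharp_khayyam container above is an OSD inventory, keyed by osd_uuid, with ceph_fsid, device, osd_id, and type fields. The shape is consistent with `ceph-volume raw list --format json`, which cephadm runs in a throwaway container to enumerate OSDs; that invocation is an inference, not shown explicitly in the log. A minimal sketch parsing it into an osd_id -> device map, using the first of the three entries logged above:

    import json

    # First entry copied verbatim from the log output above.
    raw = """{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }"""

    # Map each OSD id to its backing device (here an LVM logical volume).
    devices = {o["osd_id"]: o["device"] for o in json.loads(raw).values()}
    assert devices == {0: "/dev/mapper/ceph_vg0-ceph_lv0"}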
Nov 26 01:54:47 compute-0 systemd[1]: libpod-ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730.scope: Deactivated successfully.
Nov 26 01:54:47 compute-0 podman[419762]: 2025-11-26 01:54:47.049111079 +0000 UTC m=+1.454880523 container died ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 01:54:47 compute-0 systemd[1]: libpod-ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730.scope: Consumed 1.193s CPU time.
Nov 26 01:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b36e538f952213d6106c12e263cbaa87b8e8180126310bd64cb3f72b38a1b29-merged.mount: Deactivated successfully.
Nov 26 01:54:47 compute-0 podman[419762]: 2025-11-26 01:54:47.151644958 +0000 UTC m=+1.557414372 container remove ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:54:47 compute-0 systemd[1]: libpod-conmon-ddfb5066ac2e93e4c2602c96b1fa73e01193b25a4bc0ae59867efeac90618730.scope: Deactivated successfully.
Nov 26 01:54:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:54:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:54:47 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1a25b8e4-49a6-4ce8-ae18-5af20c9ba2b4 does not exist
Nov 26 01:54:47 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 36b1fcbe-368d-4421-8958-3fa0e20a4340 does not exist
Nov 26 01:54:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:54:48 compute-0 nova_compute[350387]: 2025-11-26 01:54:48.359 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:54:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
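[editor's note] The pg_autoscaler numbers above fit pg_target = capacity_ratio * bias * (target PGs per OSD * number of OSDs): with the default mon_target_pg_per_osd=100 and the 3 OSDs inventoried earlier, the budget is 300 PGs, and 0.0011047613669662043 * 300 reproduces the 0.3314284100898613 logged for pool 'vms'. A sketch of that heuristic, under those assumptions; the real module also honors pg_num_min/pg_num_max per pool and only acts on large deviations:

    # Assumptions: mon_target_pg_per_osd=100 (default), 3 OSDs, and a default
    # floor of 32 PGs (1 for .mgr, 16 for cephfs metadata pools, matching the
    # "quantized to" values in the log).
    def pg_target(capacity_ratio, bias, num_osds=3, target_per_osd=100):
        return capacity_ratio * bias * num_osds * target_per_osd

    def quantize(target, pg_num_min=32):
        # The module rounds to the nearest power of two and clamps at the
        # pool's pg_num_min; rounding up is used here for simplicity.
        pgs = 1
        while pgs < target:
            pgs *= 2
        return max(pg_num_min, pgs)

    # Pool 'vms': reproduces "pg target 0.3314... quantized to 32".
    t = pg_target(0.0011047613669662043, 1.0)
    assert abs(t - 0.3314284100898613) < 1e-12
    assert quantize(t) == 32
    # Pool 'cephfs.cephfs.meta': bias 4.0, floor 16 -> "quantized to 16".
    assert quantize(pg_target(5.087256625643029e-07, 4.0), pg_num_min=16) == 16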
Nov 26 01:54:51 compute-0 nova_compute[350387]: 2025-11-26 01:54:51.274 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:53 compute-0 nova_compute[350387]: 2025-11-26 01:54:53.363 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:56 compute-0 nova_compute[350387]: 2025-11-26 01:54:56.277 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:58 compute-0 nova_compute[350387]: 2025-11-26 01:54:58.367 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:54:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:54:59 compute-0 podman[158021]: time="2025-11-26T01:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:54:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:54:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:54:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
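[editor's note] The two access-log lines above are the podman system service answering libpod REST calls (API version v4.9.3) over its unix socket; the podman_exporter config a few lines below mounts that socket at /run/podman/podman.sock. A minimal sketch of issuing the same containers/json query from Python, assuming that socket path; http.client has no native unix-socket support, so the connect() override below is the usual workaround:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    # Same endpoint the client in the log hits; socket path is an assumption
    # taken from the podman_exporter volume mount later in this section.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")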
Nov 26 01:55:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:01 compute-0 nova_compute[350387]: 2025-11-26 01:55:01.282 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:01 compute-0 openstack_network_exporter[367323]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:55:01 compute-0 openstack_network_exporter[367323]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:55:01 compute-0 openstack_network_exporter[367323]: ERROR   01:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:55:01 compute-0 openstack_network_exporter[367323]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:55:01 compute-0 openstack_network_exporter[367323]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:55:02 compute-0 podman[419896]: 2025-11-26 01:55:02.571323214 +0000 UTC m=+0.116448481 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Nov 26 01:55:02 compute-0 podman[419898]: 2025-11-26 01:55:02.590928287 +0000 UTC m=+0.126594398 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:55:02 compute-0 podman[419897]: 2025-11-26 01:55:02.613144373 +0000 UTC m=+0.154266518 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 01:55:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:03 compute-0 nova_compute[350387]: 2025-11-26 01:55:03.370 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:04 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:04.503 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:55:04 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:04.505 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
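[editor's note] The "Matched UPDATE" line above shows ovsdbapp's event dispatch: the agent registers RowEvents against the southbound IDL, and each table update is compared to them; here SbGlobalUpdateEvent fires on the SB_Global nb_cfg bump and the handler defers the Chassis_Private write seen a few lines later. A sketch of that pattern, with constructor arguments mirroring the repr in the log (the handler body and hookup line are illustrative, not neutron's actual code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) as shown in the matched repr above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # row.nb_cfg is the northbound sequence number (5 in the log);
            # neutron's real handler delays its chassis update, per the
            # "Delaying updating chassis table for 6 seconds" line.
            print('SB_Global nb_cfg ->', row.nb_cfg)

    # Typical hookup (assumption):
    # idl.notify_handler.watch_event(SbGlobalUpdateEvent())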
Nov 26 01:55:04 compute-0 nova_compute[350387]: 2025-11-26 01:55:04.510 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:05 compute-0 podman[419952]: 2025-11-26 01:55:05.580744635 +0000 UTC m=+0.125242739 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118)
Nov 26 01:55:05 compute-0 podman[419953]: 2025-11-26 01:55:05.646293462 +0000 UTC m=+0.189518650 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:55:06 compute-0 nova_compute[350387]: 2025-11-26 01:55:06.283 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:08 compute-0 nova_compute[350387]: 2025-11-26 01:55:08.375 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:08 compute-0 podman[419996]: 2025-11-26 01:55:08.599758666 +0000 UTC m=+0.146317074 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, release=1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4)
Nov 26 01:55:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.428 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.430 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.455 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.569 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.571 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.584 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.585 350391 INFO nova.compute.claims [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 01:55:09 compute-0 nova_compute[350387]: 2025-11-26 01:55:09.732 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:55:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71246245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.221 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.237 350391 DEBUG nova.compute.provider_tree [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.252 350391 DEBUG nova.scheduler.client.report [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
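[editor's note] The inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the 60 GiB Ceph cluster returned by the `ceph df` call two entries earlier becomes 52.2 schedulable DISK_GB. A worked sketch using the exact values logged:

    # Values copied from the inventory line above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2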
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.274 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.275 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.322 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.323 350391 DEBUG nova.network.neutron [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.345 350391 INFO nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.381 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.475 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.478 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.479 350391 INFO nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Creating image(s)#033[00m
Nov 26 01:55:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:10.507 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.558 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.618 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.670 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.680 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.763 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.764 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "f456d938eec6117407d48c9debbc5604edb4194e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.765 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.766 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
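[editor's note] The three lockutils lines above show nova serializing base-image fetches with a named lock keyed on the image hash, so concurrent builds on the same host prepare each base image exactly once. A minimal sketch of the same pattern with oslo.concurrency (the function name and body are illustrative):

    from oslo_concurrency import lockutils

    # Lock name taken from the log; only one caller holds it at a time.
    @lockutils.synchronized('f456d938eec6117407d48c9debbc5604edb4194e')
    def fetch_base_image():
        # download/convert the base image; skipped when already cached,
        # which is why the lock above is held for only ~0.001s
        pass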
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.808 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:10 compute-0 nova_compute[350387]: 2025-11-26 01:55:10.817 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 139 MiB data, 250 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:11 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.223 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.393 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] resizing rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.613 350391 DEBUG nova.objects.instance [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'migration_context' on Instance uuid a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.669 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.716 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.725 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.811 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.812 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.812 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.813 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.858 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:11 compute-0 nova_compute[350387]: 2025-11-26 01:55:11.866 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.362 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
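[editor's note] The disk-creation flow above is: check that the RBD image does not exist, `qemu-img info` the cached base file (wrapped in oslo prlimit to cap memory and CPU), `rbd import` it into the vms pool, then resize the root disk to the flavor size (the "resizing ... to 1073741824" line; nova does that step through the rbd python binding). A condensed replay of the ephemeral-disk import command exactly as logged; it would really create the image if pointed at a live cluster:

    import subprocess

    # Flags copied verbatim from the "Running cmd (subprocess)" line above.
    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/ephemeral_1_0706d66",
         "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)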
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.394726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112394777, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3448044, "memory_usage": 3501808, "flush_reason": "Manual Compaction"}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112422378, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3393217, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25520, "largest_seqno": 27563, "table_properties": {"data_size": 3383783, "index_size": 5992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18545, "raw_average_key_size": 20, "raw_value_size": 3365219, "raw_average_value_size": 3642, "num_data_blocks": 265, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764121879, "oldest_key_time": 1764121879, "file_creation_time": 1764122112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 28313 microseconds, and 14872 cpu microseconds.
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.423036) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3393217 bytes OK
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.423629) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.427229) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.427256) EVENT_LOG_v1 {"time_micros": 1764122112427247, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.427283) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3439509, prev total WAL file size 3439509, number of live WAL files 2.
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.433586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3313KB)], [59(7178KB)]
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112433638, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10744308, "oldest_snapshot_seqno": -1}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5014 keys, 8977611 bytes, temperature: kUnknown
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112490207, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8977611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8942403, "index_size": 21594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 124445, "raw_average_key_size": 24, "raw_value_size": 8849963, "raw_average_value_size": 1765, "num_data_blocks": 895, "num_entries": 5014, "num_filter_entries": 5014, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.490410) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8977611 bytes
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.492693) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.8 rd, 158.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5528, records dropped: 514 output_compression: NoCompression
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.492715) EVENT_LOG_v1 {"time_micros": 1764122112492705, "job": 32, "event": "compaction_finished", "compaction_time_micros": 56619, "compaction_time_cpu_micros": 36440, "output_level": 6, "num_output_files": 1, "total_output_size": 8977611, "num_input_records": 5528, "num_output_records": 5014, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112493613, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122112495881, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.433339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.496278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.496285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.496288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.496291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:55:12 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:55:12.496294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
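
[editor's note] The rocksdb EVENT_LOG_v1 records above (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion) carry single-line JSON payloads, which makes the mon store's flush/compaction activity easy to mine from the journal. A small sketch that extracts them — the journal unit name in the usage comment is an assumption, adjust to the local ceph-mon unit:

    import json
    import re
    import sys

    # Pull the JSON payload out of each "EVENT_LOG_v1 {...}" line, e.g.:
    #   journalctl -u ceph-mon@compute-0 | python3 rocksdb_events.py
    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            print(ev['time_micros'], ev['event'], ev.get('job'))
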
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.602 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.603 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Ensure instance console log exists: /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.604 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.604 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.605 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 151 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 354 KiB/s wr, 4 op/s
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.880 350391 DEBUG nova.network.neutron [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Successfully updated port: 867227e5-4422-4cfb-93d9-0589612717db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.901 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.902 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquired lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:55:12 compute-0 nova_compute[350387]: 2025-11-26 01:55:12.902 350391 DEBUG nova.network.neutron [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 01:55:13 compute-0 nova_compute[350387]: 2025-11-26 01:55:13.007 350391 DEBUG nova.compute.manager [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-changed-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:55:13 compute-0 nova_compute[350387]: 2025-11-26 01:55:13.008 350391 DEBUG nova.compute.manager [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Refreshing instance network info cache due to event network-changed-867227e5-4422-4cfb-93d9-0589612717db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 01:55:13 compute-0 nova_compute[350387]: 2025-11-26 01:55:13.009 350391 DEBUG oslo_concurrency.lockutils [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:55:13 compute-0 nova_compute[350387]: 2025-11-26 01:55:13.081 350391 DEBUG nova.network.neutron [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 01:55:13 compute-0 nova_compute[350387]: 2025-11-26 01:55:13.380 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:13 compute-0 podman[420332]: 2025-11-26 01:55:13.603737131 +0000 UTC m=+0.148381521 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
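
[editor's note] The podman event above is the periodic health check of the multipathd container reporting health_status=healthy with a zero failing streak. The same check can be driven by hand; a sketch assuming the container name from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # ('/openstack/healthcheck' per the config_data above) and exits 0
    # when the container is healthy.
    rc = subprocess.call(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)
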
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.715 350391 DEBUG nova.network.neutron [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
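
[editor's note] The network_info blob logged above is valid JSON (note the lowercase true/false/null), so it can be lifted straight out of the journal and walked. A sketch that prints each port's fixed and floating addresses; the filename is hypothetical and stands in for the [...] payload copied from the log line:

    import json

    # network_info.json: the [...] list from the log line above
    with open('network_info.json') as f:
        network_info = json.load(f)

    for port in network_info:
        for subnet in port['network']['subnets']:
            for ip in subnet['ips']:
                floats = [fip['address']
                          for fip in ip.get('floating_ips', [])]
                print(port['id'], ip['address'], floats)
                # -> 867227e5-... 192.168.0.36 ['192.168.122.202']
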
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.744 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Releasing lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.744 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Instance network_info: |[{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.744 350391 DEBUG oslo_concurrency.lockutils [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.745 350391 DEBUG nova.network.neutron [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Refreshing network info cache for port 867227e5-4422-4cfb-93d9-0589612717db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.750 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Start _get_guest_xml network_info=[{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}], 'ephemerals': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'size': 1, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.760 350391 WARNING nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.769 350391 DEBUG nova.virt.libvirt.host [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.770 350391 DEBUG nova.virt.libvirt.host [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.780 350391 DEBUG nova.virt.libvirt.host [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.781 350391 DEBUG nova.virt.libvirt.host [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
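
[editor's note] The two probes above first look for a cgroup v1 'cpu' controller (missing on this host) and then find one on the v2 unified hierarchy. The v2 side of that check reduces to reading a single kernel file; a simplified sketch of the underlying interface, not Nova's actual implementation:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a cgroup v2 (unified) mount, the available controllers are
        # listed space-separated in cgroup.controllers.
        ctrl = Path(root) / 'cgroup.controllers'
        if not ctrl.exists():
            return False  # not a unified hierarchy
        return 'cpu' in ctrl.read_text().split()

    print(has_cgroupsv2_cpu_controller())
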
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.782 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.782 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T01:48:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='030e95e2-5458-42ef-a5df-79a19c0b681d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.783 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.784 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.784 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.785 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 01:55:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.785 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.786 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.787 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.787 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.788 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.788 350391 DEBUG nova.virt.hardware [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
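
[editor's note] The topology lines above walk from flavor/image constraints (none set, so the 65536 per-dimension defaults apply) to the single valid layout for one vCPU: 1 socket x 1 core x 1 thread. A simplified sketch of that enumeration step — Nova's nova.virt.hardware also applies preference ordering, which this deliberately omits:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Every (sockets, cores, threads) triple whose product is exactly
        # the vCPU count, bounded by the per-dimension limits.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # Matches "Got 1 possible topologies" above: [(1, 1, 1)]
    print(list(possible_topologies(1, 65536, 65536, 65536)))
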
Nov 26 01:55:14 compute-0 nova_compute[350387]: 2025-11-26 01:55:14.792 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 161 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 745 KiB/s wr, 16 op/s
Nov 26 01:55:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:55:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13191364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:55:15 compute-0 nova_compute[350387]: 2025-11-26 01:55:15.353 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:15 compute-0 nova_compute[350387]: 2025-11-26 01:55:15.354 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:15 compute-0 podman[420376]: 2025-11-26 01:55:15.551040898 +0000 UTC m=+0.101169392 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, architecture=x86_64)
Nov 26 01:55:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:55:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1162016773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:55:15 compute-0 nova_compute[350387]: 2025-11-26 01:55:15.846 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:15 compute-0 nova_compute[350387]: 2025-11-26 01:55:15.905 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:15 compute-0 nova_compute[350387]: 2025-11-26 01:55:15.919 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.287 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:55:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690570927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.452 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
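
[editor's note] Nova shells out to the ceph CLI three times in this window (01:55:14.792, 01:55:15.354, 01:55:15.919) to resolve the monitor map before wiring the RBD disks into the guest definition below. The same call reproduced standalone; the monmap field names follow recent Ceph releases and the call assumes the client.openstack keyring is in place:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)

    # One entry per monitor; this cluster has a single mon on
    # 192.168.122.100:6789 (see the <host> elements in the XML below).
    for mon in monmap['mons']:
        print(mon['name'], mon.get('public_addrs') or mon.get('addr'))
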
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.455 350391 DEBUG nova.virt.libvirt.vif [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',id=3,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-g9hi0hcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:55:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 01:55:16 compute-0 nova_compute[350387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=a8b199f7-8cd5-45ea-bc7e-af8352a6afa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.456 350391 DEBUG nova.network.os_vif_util [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.457 350391 DEBUG nova.network.os_vif_util [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.459 350391 DEBUG nova.objects.instance [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'pci_devices' on Instance uuid a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.479 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] End _get_guest_xml xml=<domain type="kvm">
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <uuid>a8b199f7-8cd5-45ea-bc7e-af8352a6afa2</uuid>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <name>instance-00000003</name>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <memory>524288</memory>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <metadata>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:name>vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o</nova:name>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 01:55:14</nova:creationTime>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:flavor name="m1.small">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:memory>512</nova:memory>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:user uuid="b130e7a8bed3424f9f5ff63b35cd2b28">admin</nova:user>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:project uuid="4d902f6105ab4c81a51a4751fa89a83e">admin</nova:project>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="48e08d00-37a3-4465-a949-ff0b8afe4def"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <nova:port uuid="867227e5-4422-4cfb-93d9-0589612717db">
Nov 26 01:55:16 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="192.168.0.36" ipVersion="4"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </metadata>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <system>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="serial">a8b199f7-8cd5-45ea-bc7e-af8352a6afa2</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="uuid">a8b199f7-8cd5-45ea-bc7e-af8352a6afa2</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </system>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <os>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </os>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <features>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <apic/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </features>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </clock>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </source>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.eph0">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </source>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <target dev="vdb" bus="virtio"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </source>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:55:16 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:d6:c0:70"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <target dev="tap867227e5-44"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/console.log" append="off"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </serial>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <video>
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </video>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 01:55:16 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 01:55:16 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 01:55:16 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:55:16 compute-0 nova_compute[350387]: </domain>
Nov 26 01:55:16 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
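The XML dump above is the complete libvirt definition Nova generated for instance-00000003: q35 machine type, host-model CPU, three RBD-backed disks (root vda, ephemeral vdb, config-drive sda) and one virtio NIC destined for br-int. A minimal sketch for pulling the disk layout back out of such a dump with the standard library; the guest.xml file name is an assumption, not something Nova writes:

# Sketch: list disk targets and their RBD/local sources from a domain XML dump.
# Assumes the <domain> block above was saved verbatim as guest.xml (hypothetical).
import xml.etree.ElementTree as ET

root = ET.parse("guest.xml").getroot()
for disk in root.findall("./devices/disk"):
    target = disk.find("target").get("dev")            # vda, vdb, sda
    source = disk.find("source")
    name = source.get("name") or source.get("file")    # e.g. vms/<uuid>_disk
    print(f"{target} ({disk.get('device')}): {name}")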
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.483 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Preparing to wait for external event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.484 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.484 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.485 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
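The acquire/release pair above is oslo.concurrency's lockutils: all event bookkeeping for one instance is serialized behind a single "<instance-uuid>-events" lock so prepare, pop and cancel never race. The same pattern, sketched outside Nova (the function body and dictionary are illustrative only):

# Sketch of the oslo.concurrency locking pattern seen in the three lines above.
from oslo_concurrency import lockutils

_events = {}

@lockutils.synchronized("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events")
def _create_or_get_event(name):
    # Runs with the per-instance events lock held, like the inner function
    # named in the log; the module-level dict stands in for Nova's state.
    return _events.setdefault(name, object())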
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.486 350391 DEBUG nova.virt.libvirt.vif [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',id=3,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-g9hi0hcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:55:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 26 01:55:16 compute-0 nova_compute[350387]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=a8b199f7-8cd5-45ea-bc7e-af8352a6afa2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.486 350391 DEBUG nova.network.os_vif_util [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.487 350391 DEBUG nova.network.os_vif_util [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.488 350391 DEBUG os_vif [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
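The three records above show the os-vif handoff: Nova converts its internal VIF dict into a versioned VIFOpenVSwitch object and passes it to the 'ovs' plugin. A sketch of the equivalent direct calls against the os-vif public API, with field values copied from the log; treat the exact constructor fields as an approximation for this os-vif release:

# Sketch of the os_vif.plug() call behind the "Plugging vif" line above.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the ovs plugin (among others) once per process

my_vif = vif.VIFOpenVSwitch(
    id="867227e5-4422-4cfb-93d9-0589612717db",
    address="fa:16:3e:d6:c0:70",
    bridge_name="br-int",
    vif_name="tap867227e5-44",
    plugin="ovs",
    network=network.Network(id="c97f5f89-70be-4349-beb5-5f8e6065072e"),
)
info = instance_info.InstanceInfo(
    uuid="a8b199f7-8cd5-45ea-bc7e-af8352a6afa2",
    name="instance-00000003",
)
os_vif.plug(my_vif, info)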
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.489 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.490 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.490 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.496 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.496 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap867227e5-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.497 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap867227e5-44, col_values=(('external_ids', {'iface-id': '867227e5-4422-4cfb-93d9-0589612717db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:c0:70', 'vm-uuid': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.500 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.502 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
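Those AddPortCommand/DbSetCommand entries are the os-vif ovs plugin wiring the tap into br-int through ovsdbapp: one transaction idempotently creates the port and stamps the Interface row with the Neutron iface-id, MAC and instance UUID that ovn-controller later matches on. The same transaction, sketched standalone (the socket path and timeout are assumptions for a typical host):

# Sketch of the two-command OVSDB transaction logged above, via ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")  # assumed socket path
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap867227e5-44", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap867227e5-44",
        ("external_ids", {
            "iface-id": "867227e5-4422-4cfb-93d9-0589612717db",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:d6:c0:70",
            "vm-uuid": "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2",
        })))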
Nov 26 01:55:16 compute-0 NetworkManager[48886]: <info>  [1764122116.5032] manager: (tap867227e5-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.513 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.514 350391 INFO os_vif [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44')#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.589 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.590 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.591 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.592 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No VIF found with MAC fa:16:3e:d6:c0:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.594 350391 INFO nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Using config drive#033[00m
Nov 26 01:55:16 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 01:55:16.455 350391 DEBUG nova.virt.libvirt.vif [None req-672751c4-16 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 01:55:16 compute-0 podman[420456]: 2025-11-26 01:55:16.59638019 +0000 UTC m=+0.147965720 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:55:16 compute-0 nova_compute[350387]: 2025-11-26 01:55:16.654 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:16 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 01:55:16.486 350391 DEBUG nova.virt.libvirt.vif [None req-672751c4-16 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
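Both rsyslogd complaints refer to the oversized nova.virt.libvirt.vif records (the ones carrying the embedded user_data), which exceeded the configured 8096-byte limit and were cut; that appears to be why the base64 blob earlier in this log has a gap in it. If full records are needed, the limit has to be raised in rsyslog before any input module is loaded; a sketch, with the 64k value chosen arbitrarily:

# /etc/rsyslog.conf -- global() must come before input modules are loaded
global(maxMessageSize="64k")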
Nov 26 01:55:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 172 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Nov 26 01:55:17 compute-0 nova_compute[350387]: 2025-11-26 01:55:17.423 350391 DEBUG nova.network.neutron [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updated VIF entry in instance network info cache for port 867227e5-4422-4cfb-93d9-0589612717db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 01:55:17 compute-0 nova_compute[350387]: 2025-11-26 01:55:17.423 350391 DEBUG nova.network.neutron [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:55:17 compute-0 nova_compute[350387]: 2025-11-26 01:55:17.444 350391 DEBUG oslo_concurrency.lockutils [req-fe47a05e-de89-4f43-903f-be434d69a21a req-562dcd4f-e78e-41c3-9ee7-7ab827ee4e58 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.360 350391 INFO nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Creating config drive at /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.371 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wsltmsu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.519 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6wsltmsu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.578 350391 DEBUG nova.storage.rbd_utils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.589 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.850 350391 DEBUG oslo_concurrency.processutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:18 compute-0 nova_compute[350387]: 2025-11-26 01:55:18.852 350391 INFO nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Deleting local config drive /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.config because it was imported into RBD.#033[00m
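The config-drive sequence (01:55:18.360 through 18.852) is self-contained: Nova renders the metadata into a temp directory, builds an ISO9660/Joliet image labeled config-2 with mkisofs, imports it into the vms RBD pool as <uuid>_disk.config (the earlier "does not exist" probes checked for a stale copy), then deletes the local file. The same two commands reduced to a sketch; the /tmp path is copied from this run and is regenerated each time:

# Sketch of the mkisofs + rbd import pair logged above.
import subprocess

inst = "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2"
iso = f"/var/lib/nova/instances/{inst}/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp6wsltmsu"],
    check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)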
Nov 26 01:55:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Nov 26 01:55:18 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 01:55:18 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 01:55:18 compute-0 kernel: tap867227e5-44: entered promiscuous mode
Nov 26 01:55:18 compute-0 ovn_controller[89102]: 2025-11-26T01:55:18Z|00040|binding|INFO|Claiming lport 867227e5-4422-4cfb-93d9-0589612717db for this chassis.
Nov 26 01:55:18 compute-0 ovn_controller[89102]: 2025-11-26T01:55:18Z|00041|binding|INFO|867227e5-4422-4cfb-93d9-0589612717db: Claiming fa:16:3e:d6:c0:70 192.168.0.36
Nov 26 01:55:19 compute-0 NetworkManager[48886]: <info>  [1764122119.0020] manager: (tap867227e5-44): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.002 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.013 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c0:70 192.168.0.36'], port_security=['fa:16:3e:d6:c0:70 192.168.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-kl5by2wl55k2-qlnmxyop4kzj-port-cgeuuhndjcpy', 'neutron:cidrs': '192.168.0.36/24', 'neutron:device_id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-kl5by2wl55k2-qlnmxyop4kzj-port-cgeuuhndjcpy', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=867227e5-4422-4cfb-93d9-0589612717db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.017 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 867227e5-4422-4cfb-93d9-0589612717db in datapath c97f5f89-70be-4349-beb5-5f8e6065072e bound to our chassis#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.023 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.026 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 01:55:19 compute-0 ovn_controller[89102]: 2025-11-26T01:55:19Z|00042|binding|INFO|Setting lport 867227e5-4422-4cfb-93d9-0589612717db ovn-installed in OVS
Nov 26 01:55:19 compute-0 ovn_controller[89102]: 2025-11-26T01:55:19Z|00043|binding|INFO|Setting lport 867227e5-4422-4cfb-93d9-0589612717db up in Southbound
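ovn-controller's binding lines close the loop: it claims the logical port for this chassis, marks the Interface ovn-installed in OVS, and sets the Port_Binding up in the Southbound DB, which is what ultimately lets Neutron fire network-vif-plugged. A quick verification sketch from the compute node (assumes ovn-sbctl can reach the SB DB from this host, as it can on this collapsed deployment):

# Sketch: confirm both sides of the OVN binding reported above.
import subprocess

# OVS side: ovn-controller stamps external_ids:ovn-installed on the Interface.
subprocess.run(["ovs-vsctl", "get", "Interface", "tap867227e5-44",
                "external_ids:ovn-installed"], check=True)

# SB side: the Port_Binding row should now show up=true and this chassis.
subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                "logical_port=867227e5-4422-4cfb-93d9-0589612717db"],
               check=True)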
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.032 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:19 compute-0 systemd-machined[138512]: New machine qemu-3-instance-00000003.
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.051 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b834baae-670b-451c-b1f4-8c801baf9840]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:55:19 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 26 01:55:19 compute-0 systemd-udevd[420574]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.087 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[64442979-faf6-48c9-802e-34fc65a95e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:55:19 compute-0 NetworkManager[48886]: <info>  [1764122119.0932] device (tap867227e5-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 01:55:19 compute-0 NetworkManager[48886]: <info>  [1764122119.0942] device (tap867227e5-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.091 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd0545a-cf34-4d79-bdd5-60baf9598cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.128 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[282111f1-19fd-4d27-9847-25a9be9698e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.149 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[84123326-abaf-49fa-9484-0d4d9b41b2fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 19796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420584, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.169 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fff00c43-8906-48fb-b71c-7720eca08bbc]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420586, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420586, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
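Those two privsep replies are pyroute2 netlink dumps taken inside the ovnmeta-c97f5f89-... namespace: the metadata agent is verifying that tapc97f5f89-71 is up and carries both the subnet address 192.168.0.2/24 and the well-known 169.254.169.254/32 metadata address. The same inspection, sketched directly with pyroute2 (requires root; the namespace and interface names are copied from the log):

# Sketch: repeat the namespace inspection from the privsep replies above.
from pyroute2 import NetNS

with NetNS("ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e") as ns:
    idx = ns.link_lookup(ifname="tapc97f5f89-71")[0]
    for addr in ns.get_addr(index=idx):
        # Expect 192.168.0.2 and 169.254.169.254, matching the log.
        print(dict(addr["attrs"]).get("IFA_ADDRESS"))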
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.172 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.174 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.179 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.181 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.182 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:55:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:19.182 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.568 350391 DEBUG nova.compute.manager [req-9d9104d2-b6bf-4b55-9f30-b69cef741ff7 req-348f0c50-ebd1-49bf-97ae-7c9f4a9f0d55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.569 350391 DEBUG oslo_concurrency.lockutils [req-9d9104d2-b6bf-4b55-9f30-b69cef741ff7 req-348f0c50-ebd1-49bf-97ae-7c9f4a9f0d55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.570 350391 DEBUG oslo_concurrency.lockutils [req-9d9104d2-b6bf-4b55-9f30-b69cef741ff7 req-348f0c50-ebd1-49bf-97ae-7c9f4a9f0d55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.570 350391 DEBUG oslo_concurrency.lockutils [req-9d9104d2-b6bf-4b55-9f30-b69cef741ff7 req-348f0c50-ebd1-49bf-97ae-7c9f4a9f0d55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.571 350391 DEBUG nova.compute.manager [req-9d9104d2-b6bf-4b55-9f30-b69cef741ff7 req-348f0c50-ebd1-49bf-97ae-7c9f4a9f0d55 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Processing event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
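The event being popped here is the one prepared at 01:55:16.483: Neutron, having seen OVN set the port up, calls Nova's os-server-external-events API, and this req-9d9104d2 thread routes it to the waiting spawn. The wire call Neutron makes is equivalent to this sketch (endpoint and token are placeholders; Neutron goes through its Nova client rather than raw requests):

# Sketch of the os-server-external-events call that delivers the event above.
import requests

NOVA_URL = "http://nova-internal.openstack.svc:8774/v2.1"  # placeholder
TOKEN = "a-valid-keystone-token"                           # placeholder

requests.post(
    f"{NOVA_URL}/os-server-external-events",
    headers={"X-Auth-Token": TOKEN},
    json={"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2",
        "tag": "867227e5-4422-4cfb-93d9-0589612717db",
        "status": "completed",
    }]},
    timeout=10)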
Nov 26 01:55:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.877 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.878 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122119.8771417, a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.879 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] VM Started (Lifecycle Event)#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.884 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.890 350391 INFO nova.virt.libvirt.driver [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Instance spawned successfully.#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.891 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.902 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.910 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.917 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.917 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.918 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.918 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.919 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.919 350391 DEBUG nova.virt.libvirt.driver [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
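The six "Found default" lines record which buses and models this guest was actually built with, so the instance keeps a stable device layout across later operations. Read as a simple fill-in-missing-keys merge (an assumption; register_undefined_details below is illustrative, not nova's code), the step amounts to:

# Defaults are exactly the values the driver reported for this guest.
defaults = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}

def register_undefined_details(image_props):
    # Keep anything the image already defines; record a default otherwise.
    return {**defaults, **image_props}

print(register_undefined_details({'hw_disk_bus': 'scsi'}))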
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.927 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.927 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122119.8772943, a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.928 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] VM Paused (Lifecycle Event)#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.955 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.963 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122119.8832371, a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.964 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] VM Resumed (Lifecycle Event)#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.970 350391 INFO nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Took 9.49 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.971 350391 DEBUG nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.985 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:55:19 compute-0 nova_compute[350387]: 2025-11-26 01:55:19.992 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:55:20 compute-0 nova_compute[350387]: 2025-11-26 01:55:20.034 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
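The power-state sync above compares libvirt's view (VM power_state 1, RUNNING) with the database (0, NOSTATE) but backs off because task_state is still 'spawning'. The skip rule reduces to a small guard; the numeric constants match nova.compute.power_state:

NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return 'skip: pending task %s' % task_state   # the case logged here
    if db_power_state != vm_power_state:
        return 'update DB to %d' % vm_power_state
    return 'in sync'

print(sync_power_state(NOSTATE, RUNNING, 'spawning'))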
Nov 26 01:55:20 compute-0 nova_compute[350387]: 2025-11-26 01:55:20.060 350391 INFO nova.compute.manager [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Took 10.53 seconds to build instance.#033[00m
Nov 26 01:55:20 compute-0 nova_compute[350387]: 2025-11-26 01:55:20.080 350391 DEBUG oslo_concurrency.lockutils [None req-672751c4-167e-43c8-a315-b11c56ba571c b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.290 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.340 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.341 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.342 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.342 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.343 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.344 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
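This run of "Running periodic task" lines is oslo.service iterating over decorated methods on the compute manager. A minimal self-contained sketch of that mechanism; the spacing and run_immediately values are assumptions, nova tunes each task separately:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _check_instance_build_time(self, context):
        pass  # would time out builds stuck in BUILDING

    def tick(self, context):
        # oslo logs "Running periodic task ..." for each decorated method
        self.run_periodic_tasks(context)

Manager().tick(context=None)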
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.378 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.379 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.379 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
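The acquiring/acquired/released triplet around clean_compute_node_cache, with its waited/held timings, is the standard trace oslo.concurrency emits from its lock wrapper. In application code the whole pattern is one decorator:

from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # resource-tracker work runs while "compute_resources" is held;
    # the wrapper logs the acquire/release lines seen in the journal
    return 'done'

clean_compute_node_cache()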
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.380 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.380 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:21 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 01:55:21 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.499 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.644 350391 DEBUG nova.compute.manager [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.646 350391 DEBUG oslo_concurrency.lockutils [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.650 350391 DEBUG oslo_concurrency.lockutils [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.651 350391 DEBUG oslo_concurrency.lockutils [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.651 350391 DEBUG nova.compute.manager [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] No waiting events found dispatching network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.652 350391 WARNING nova.compute.manager [req-d27de99d-4651-4d78-add3-0987b07b752e req-bfc6cb48-7d28-42c5-9d76-2f679e76d4b1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received unexpected event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db for instance with vm_state active and task_state None.#033[00m
Nov 26 01:55:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:55:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920880537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:55:21 compute-0 nova_compute[350387]: 2025-11-26 01:55:21.951 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
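update_available_resource shells out to `ceph df` for pool capacity, which is also what produces the ceph-mon handle_command/audit lines above. Reproducing the call with the same binary, id, and conf path (requires a reachable cluster; the JSON keys read at the end are from ceph's documented json output, stated here as an assumption):

import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])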
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.077 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.079 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.079 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.088 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.088 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.089 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.095 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.096 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.096 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.538 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.539 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3727MB free_disk=59.905879974365234GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.539 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.540 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.614 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.615 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.615 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.615 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.616 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
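The final view's used figures can be checked directly against the three placement allocations listed above plus the 512 MB reserved RAM in the inventory reported just below: 512 + 3x512 = 2048 MB RAM, 3x2 = 6 GB disk, 3x1 = 3 vCPUs.

# Check of the "Final resource view" numbers against the allocations above.
allocations = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 3
reserved_ram_mb = 512  # MEMORY_MB 'reserved' from the inventory below

used_ram = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocations)
used_disk = sum(a['DISK_GB'] for a in allocations)
used_vcpus = sum(a['VCPU'] for a in allocations)
assert (used_ram, used_disk, used_vcpus) == (2048, 6, 3)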
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.634 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.663 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.663 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
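Given that inventory, the schedulable capacity per resource class follows placement's formula, capacity = (total - reserved) * allocation_ratio: 32 VCPU, 7167 MB RAM, and 52 GB disk for this provider.

# Capacity placement derives from the inventory logged above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52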
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.677 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.699 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 01:55:22 compute-0 nova_compute[350387]: 2025-11-26 01:55:22.764 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:55:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Nov 26 01:55:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:55:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943549414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:55:23 compute-0 nova_compute[350387]: 2025-11-26 01:55:23.236 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:55:23 compute-0 nova_compute[350387]: 2025-11-26 01:55:23.244 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:55:23 compute-0 nova_compute[350387]: 2025-11-26 01:55:23.265 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 01:55:23 compute-0 nova_compute[350387]: 2025-11-26 01:55:23.297 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:55:23 compute-0 nova_compute[350387]: 2025-11-26 01:55:23.298 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:24 compute-0 nova_compute[350387]: 2025-11-26 01:55:24.254 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:24 compute-0 nova_compute[350387]: 2025-11-26 01:55:24.255 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:55:24 compute-0 nova_compute[350387]: 2025-11-26 01:55:24.711 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:55:24 compute-0 nova_compute[350387]: 2025-11-26 01:55:24.711 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:55:24 compute-0 nova_compute[350387]: 2025-11-26 01:55:24.712 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 01:55:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 756 KiB/s rd, 1.0 MiB/s wr, 67 op/s
Nov 26 01:55:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:24.972 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:55:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:24.975 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:55:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:55:24.976 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:55:26 compute-0 nova_compute[350387]: 2025-11-26 01:55:26.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:26 compute-0 nova_compute[350387]: 2025-11-26 01:55:26.502 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 664 KiB/s wr, 78 op/s
Nov 26 01:55:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:55:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479272089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:55:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:55:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479272089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.433 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.449 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.450 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
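The refreshed network_info above is a list of vif dicts; the addresses fall out by walking network -> subnets -> ips. A trimmed sample built from the logged entry, keeping only the fields needed here:

network_info = [{
    'devname': 'tapcc7c212d-f2',
    'network': {'subnets': [{'ips': [{
        'address': '192.168.0.118',
        'floating_ips': [{'address': '192.168.122.183'}],
    }]}]},
}]

def addresses(network_info):
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                yield ip['address'], [f['address']
                                      for f in ip.get('floating_ips', [])]

for fixed, floating in addresses(network_info):
    print(fixed, floating)  # 192.168.0.118 ['192.168.122.183']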
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.451 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.452 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:28 compute-0 nova_compute[350387]: 2025-11-26 01:55:28.453 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:55:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 69 op/s
Nov 26 01:55:29 compute-0 nova_compute[350387]: 2025-11-26 01:55:29.493 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:55:29 compute-0 podman[158021]: time="2025-11-26T01:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:55:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:55:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8641 "" "Go-http-client/1.1"
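Those two GETs are the podman exporter polling libpod's REST API over the unix socket (the socket path appears later in the podman_exporter config_data). The same container listing can be fetched with the stdlib alone; UnixHTTPConnection is a local helper written for this sketch, not a podman client API:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__('localhost')  # host is unused over a unix socket
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
resp = conn.getresponse()
print(resp.status, len(json.loads(resp.read())))  # 200 and the container count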
Nov 26 01:55:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Nov 26 01:55:31 compute-0 nova_compute[350387]: 2025-11-26 01:55:31.298 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:31 compute-0 openstack_network_exporter[367323]: ERROR   01:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:55:31 compute-0 openstack_network_exporter[367323]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:55:31 compute-0 openstack_network_exporter[367323]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:55:31 compute-0 openstack_network_exporter[367323]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:55:31 compute-0 openstack_network_exporter[367323]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:55:31 compute-0 nova_compute[350387]: 2025-11-26 01:55:31.506 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 85 B/s wr, 55 op/s
Nov 26 01:55:33 compute-0 podman[420715]: 2025-11-26 01:55:33.554712849 +0000 UTC m=+0.110512165 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:55:33 compute-0 podman[420714]: 2025-11-26 01:55:33.554712039 +0000 UTC m=+0.103214029 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 01:55:33 compute-0 podman[420713]: 2025-11-26 01:55:33.554629036 +0000 UTC m=+0.105537584 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 26 01:55:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 39 op/s
Nov 26 01:55:36 compute-0 nova_compute[350387]: 2025-11-26 01:55:36.301 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:36 compute-0 nova_compute[350387]: 2025-11-26 01:55:36.511 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:55:36 compute-0 podman[420774]: 2025-11-26 01:55:36.584000408 +0000 UTC m=+0.123590183 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:55:36 compute-0 podman[420775]: 2025-11-26 01:55:36.633134283 +0000 UTC m=+0.169792625 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:55:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 787 KiB/s rd, 25 op/s
Nov 26 01:55:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2 op/s
Nov 26 01:55:39 compute-0 podman[420816]: 2025-11-26 01:55:39.599459339 +0000 UTC m=+0.147100926 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, name=ubi9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64)
Nov 26 01:55:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:55:41
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms']
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:55:41 compute-0 nova_compute[350387]: 2025-11-26 01:55:41.308 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:41 compute-0 nova_compute[350387]: 2025-11-26 01:55:41.515 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:55:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:55:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:44 compute-0 podman[420835]: 2025-11-26 01:55:44.597451617 +0000 UTC m=+0.141958161 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:55:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:46 compute-0 nova_compute[350387]: 2025-11-26 01:55:46.310 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:46 compute-0 nova_compute[350387]: 2025-11-26 01:55:46.518 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:46 compute-0 podman[420854]: 2025-11-26 01:55:46.585219971 +0000 UTC m=+0.141328623 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc.)
Nov 26 01:55:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:47 compute-0 podman[420875]: 2025-11-26 01:55:47.545388844 +0000 UTC m=+0.096997674 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
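
    # Note on the podman health_status records above: each one embeds the
    # container's edpm_ansible-managed configuration as a config_data label
    # whose value reads as a Python dict literal (single quotes, True/False).
    # A minimal sketch, assuming the value is a well-formed literal with no
    # brace characters inside its strings (true of every record above); the
    # shortened 'line' below is a hypothetical excerpt of the node_exporter
    # record, not a verbatim journal line.
    import ast

    def extract_config_data(journal_line: str) -> dict:
        # Scan forward from "config_data=" to the matching close brace,
        # then parse the balanced {...} span as a Python literal.
        start = journal_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(journal_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(journal_line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    line = ("container health_status ... (image=quay.io/prometheus/node-exporter:v1.5.0, "
            "config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
            "'restart': 'always', 'net': 'host', 'ports': ['9100:9100']}, "
            "config_id=edpm, container_name=node_exporter)")
    cfg = extract_config_data(line)
    print(cfg["image"], cfg["ports"])  # quay.io/prometheus/node-exporter:v1.5.0 ['9100:9100']
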
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7e37b5eb-a9d1-474d-805f-677453cb954f does not exist
Nov 26 01:55:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 49f779f4-d577-43d3-a5b5-6f9bdca97683 does not exist
Nov 26 01:55:48 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 818099ab-3e96-4c47-842f-83ffbffcd17e does not exist
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:55:48 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:48 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:55:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:49 compute-0 ovn_controller[89102]: 2025-11-26T01:55:49Z|00044|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 26 01:55:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:49 compute-0 podman[421166]: 2025-11-26 01:55:49.961761405 +0000 UTC m=+0.072799772 container create eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:49.937323606 +0000 UTC m=+0.048361983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:50 compute-0 systemd[1]: Started libpod-conmon-eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25.scope.
Nov 26 01:55:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:50.10818192 +0000 UTC m=+0.219220357 container init eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:50.124623493 +0000 UTC m=+0.235661860 container start eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:50.129691666 +0000 UTC m=+0.240730113 container attach eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:55:50 compute-0 gifted_bartik[421181]: 167 167
Nov 26 01:55:50 compute-0 systemd[1]: libpod-eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25.scope: Deactivated successfully.
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:50.139393359 +0000 UTC m=+0.250431756 container died eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c8d513091c5dfeddac585d8f52de23a6be9554cf414ca1e6d55be6b7586299-merged.mount: Deactivated successfully.
Nov 26 01:55:50 compute-0 podman[421166]: 2025-11-26 01:55:50.216138692 +0000 UTC m=+0.327177089 container remove eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:55:50 compute-0 systemd[1]: libpod-conmon-eea5c4b639475784edaff6950c74bc7ee07a183ab112ae95961d965da4bd4f25.scope: Deactivated successfully.
Nov 26 01:55:50 compute-0 podman[421204]: 2025-11-26 01:55:50.498124467 +0000 UTC m=+0.088588487 container create bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:55:50 compute-0 podman[421204]: 2025-11-26 01:55:50.462704439 +0000 UTC m=+0.053168489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:50 compute-0 systemd[1]: Started libpod-conmon-bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9.scope.
Nov 26 01:55:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:50 compute-0 podman[421204]: 2025-11-26 01:55:50.64833488 +0000 UTC m=+0.238798900 container init bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:55:50 compute-0 podman[421204]: 2025-11-26 01:55:50.663729633 +0000 UTC m=+0.254193653 container start bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:55:50 compute-0 podman[421204]: 2025-11-26 01:55:50.668349693 +0000 UTC m=+0.258813753 container attach bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:55:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013736228796314383 of space, bias 1.0, pg target 0.4120868638894315 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:55:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
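
    # Note on the pg_autoscaler run above: each pool's raw "pg target" is its
    # fraction of used space times its bias times the cluster PG budget, before
    # the autoscaler applies per-pool minimums and power-of-two quantization
    # (the "quantized to" values in the log). A minimal sketch, assuming the
    # default mon_target_pg_per_osd of 100 across the three LVM-backed OSDs on
    # this host, i.e. a budget of 300; the ratios and biases are copied from
    # the lines above.
    PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * OSD count (assumed)

    pools = {
        # name: (usage ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.0013736228796314383,  1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # e.g. vms -> 0.4120868638894315, matching the logged value exactly.
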
Nov 26 01:55:51 compute-0 nova_compute[350387]: 2025-11-26 01:55:51.314 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:51 compute-0 nova_compute[350387]: 2025-11-26 01:55:51.520 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:51 compute-0 silly_keldysh[421221]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:55:51 compute-0 silly_keldysh[421221]: --> relative data size: 1.0
Nov 26 01:55:51 compute-0 silly_keldysh[421221]: --> All data devices are unavailable
Nov 26 01:55:51 compute-0 systemd[1]: libpod-bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9.scope: Deactivated successfully.
Nov 26 01:55:51 compute-0 podman[421204]: 2025-11-26 01:55:51.840994253 +0000 UTC m=+1.431458323 container died bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 01:55:51 compute-0 systemd[1]: libpod-bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9.scope: Consumed 1.101s CPU time.
Nov 26 01:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-82351409c06cb39649d04f9c2920d6532a0326b6757bb89eeb78f7d46c1a0e3a-merged.mount: Deactivated successfully.
Nov 26 01:55:51 compute-0 podman[421204]: 2025-11-26 01:55:51.947189815 +0000 UTC m=+1.537653835 container remove bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:55:51 compute-0 systemd[1]: libpod-conmon-bece4a8a782953ca593b36d9a2bfb9648a151ac27482c8d24bdd73834427f7f9.scope: Deactivated successfully.
Nov 26 01:55:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:55:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
Nov 26 01:55:52 compute-0 ceph-osd[206645]: ** DB Stats **
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Uptime(secs): 2400.1 total, 600.0 interval
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Cumulative writes: 6681 writes, 26K keys, 6681 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Cumulative WAL: 6681 writes, 1314 syncs, 5.08 writes per sync, written: 0.02 GB, 0.01 MB/s
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Interval writes: 789 writes, 2582 keys, 789 commit groups, 1.0 writes per commit group, ingest: 2.77 MB, 0.00 MB/s
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Interval WAL: 789 writes, 312 syncs, 2.53 writes per sync, written: 0.00 GB, 0.00 MB/s
Nov 26 01:55:52 compute-0 ceph-osd[206645]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 01:55:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.049277886 +0000 UTC m=+0.066327320 container create 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.019626881 +0000 UTC m=+0.036676295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:53 compute-0 systemd[1]: Started libpod-conmon-91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018.scope.
Nov 26 01:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.187927422 +0000 UTC m=+0.204976856 container init 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.205322363 +0000 UTC m=+0.222371797 container start 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.212166035 +0000 UTC m=+0.229215469 container attach 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 01:55:53 compute-0 great_hoover[421414]: 167 167
Nov 26 01:55:53 compute-0 systemd[1]: libpod-91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018.scope: Deactivated successfully.
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.217110935 +0000 UTC m=+0.234160339 container died 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1229982f7fca25045c86cfb6714667c6ee4884be2c93be79c269b069fb65c5ad-merged.mount: Deactivated successfully.
Nov 26 01:55:53 compute-0 podman[421397]: 2025-11-26 01:55:53.272539426 +0000 UTC m=+0.289588830 container remove 91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:55:53 compute-0 systemd[1]: libpod-conmon-91555b24d21af4c52f2f440d02d9fedb537295bf07c72fedcb02702e9dbc8018.scope: Deactivated successfully.
Nov 26 01:55:53 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 26 01:55:53 compute-0 podman[421438]: 2025-11-26 01:55:53.520099211 +0000 UTC m=+0.063113159 container create 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 01:55:53 compute-0 podman[421438]: 2025-11-26 01:55:53.498477342 +0000 UTC m=+0.041491320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:53 compute-0 systemd[1]: Started libpod-conmon-7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf.scope.
Nov 26 01:55:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe39357e39f1d163890322953cd66f7a99d20bf26a66fcefb892cc67880603f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe39357e39f1d163890322953cd66f7a99d20bf26a66fcefb892cc67880603f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe39357e39f1d163890322953cd66f7a99d20bf26a66fcefb892cc67880603f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe39357e39f1d163890322953cd66f7a99d20bf26a66fcefb892cc67880603f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:53 compute-0 podman[421438]: 2025-11-26 01:55:53.667890406 +0000 UTC m=+0.210904364 container init 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:55:53 compute-0 podman[421438]: 2025-11-26 01:55:53.685756069 +0000 UTC m=+0.228770047 container start 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:55:53 compute-0 podman[421438]: 2025-11-26 01:55:53.696933374 +0000 UTC m=+0.239947322 container attach 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]: {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    "0": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "devices": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "/dev/loop3"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            ],
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_name": "ceph_lv0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_size": "21470642176",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "name": "ceph_lv0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "tags": {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_name": "ceph",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.crush_device_class": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.encrypted": "0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_id": "0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.vdo": "0"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            },
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "vg_name": "ceph_vg0"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        }
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    ],
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    "1": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "devices": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "/dev/loop4"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            ],
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_name": "ceph_lv1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_size": "21470642176",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "name": "ceph_lv1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "tags": {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_name": "ceph",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.crush_device_class": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.encrypted": "0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_id": "1",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.vdo": "0"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            },
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "vg_name": "ceph_vg1"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        }
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    ],
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    "2": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "devices": [
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "/dev/loop5"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            ],
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_name": "ceph_lv2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_size": "21470642176",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "name": "ceph_lv2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "tags": {
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.cluster_name": "ceph",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.crush_device_class": "",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.encrypted": "0",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osd_id": "2",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:                "ceph.vdo": "0"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            },
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "type": "block",
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:            "vg_name": "ceph_vg2"
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:        }
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]:    ]
Nov 26 01:55:54 compute-0 youthful_hypatia[421453]: }
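
    # Note on the JSON block that youthful_hypatia printed: it appears to be
    # ceph-volume "lvm list" output in JSON form, keyed by OSD id, showing
    # three OSDs each backed by one ~20 GiB logical volume on a loop device,
    # all tagged with cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c.
    # A minimal sketch of consuming it; 'raw' is trimmed to just the fields
    # used here rather than the full document above.
    import json

    raw = """
    { "0": [ {"lv_path": "/dev/ceph_vg0/ceph_lv0",
              "tags": {"ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff"}} ],
      "1": [ {"lv_path": "/dev/ceph_vg1/ceph_lv1",
              "tags": {"ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e"}} ],
      "2": [ {"lv_path": "/dev/ceph_vg2/ceph_lv2",
              "tags": {"ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b"}} ] }
    """

    # Map each OSD id to its backing LV and fsid, as an orchestrator would.
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
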
Nov 26 01:55:54 compute-0 ovn_controller[89102]: 2025-11-26T01:55:54Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:c0:70 192.168.0.36
Nov 26 01:55:54 compute-0 ovn_controller[89102]: 2025-11-26T01:55:54Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:c0:70 192.168.0.36
Nov 26 01:55:54 compute-0 systemd[1]: libpod-7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf.scope: Deactivated successfully.
Nov 26 01:55:54 compute-0 podman[421462]: 2025-11-26 01:55:54.6760244 +0000 UTC m=+0.066599258 container died 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fe39357e39f1d163890322953cd66f7a99d20bf26a66fcefb892cc67880603f-merged.mount: Deactivated successfully.
Nov 26 01:55:54 compute-0 podman[421462]: 2025-11-26 01:55:54.77255707 +0000 UTC m=+0.163131878 container remove 7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hypatia, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:55:54 compute-0 systemd[1]: libpod-conmon-7ae86e173fd6f6871937bca3031a680b27835c00970c057de3a2fe07b255e0bf.scope: Deactivated successfully.
Nov 26 01:55:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:55:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 177 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 361 KiB/s wr, 25 op/s
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.01587761 +0000 UTC m=+0.077496904 container create 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:55.986567865 +0000 UTC m=+0.048187189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:56 compute-0 systemd[1]: Started libpod-conmon-2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd.scope.
Nov 26 01:55:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.148687461 +0000 UTC m=+0.210306845 container init 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.157550561 +0000 UTC m=+0.219169855 container start 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.162736617 +0000 UTC m=+0.224356001 container attach 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 01:55:56 compute-0 keen_boyd[421630]: 167 167
Nov 26 01:55:56 compute-0 systemd[1]: libpod-2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd.scope: Deactivated successfully.
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.166428701 +0000 UTC m=+0.228048015 container died 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 01:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b31eab0c2b2e58ae0680d32fb5104196b373aab09473c9a98ae08bb9766b5f1a-merged.mount: Deactivated successfully.
Nov 26 01:55:56 compute-0 podman[421614]: 2025-11-26 01:55:56.232523383 +0000 UTC m=+0.294142707 container remove 2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:55:56 compute-0 systemd[1]: libpod-conmon-2e46a27d30c78edef87c9c8cc1cc3da6c9ae4004f56bf797bab917dd1cfb3dbd.scope: Deactivated successfully.
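
The keen_boyd lines above show cephadm's one-shot container pattern: create, init, start, attach, exit, and remove within a few hundred milliseconds, the container's entire output being "167 167" (which looks like the ceph UID/GID baked into the image). A sketch of driving the same pattern from Python; the probe command here is an assumption for illustration, not what cephadm actually ran:

    import subprocess

    # One-shot container: runs to completion, stdout captured, removed on exit.
    # Image digest taken from the log above; the stat probe is hypothetical.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "167 167" (ceph UID/GID)
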
Nov 26 01:55:56 compute-0 nova_compute[350387]: 2025-11-26 01:55:56.317 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:56 compute-0 podman[421652]: 2025-11-26 01:55:56.48440169 +0000 UTC m=+0.075999992 container create b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:55:56 compute-0 nova_compute[350387]: 2025-11-26 01:55:56.523 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:55:56 compute-0 podman[421652]: 2025-11-26 01:55:56.451601626 +0000 UTC m=+0.043199968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:55:56 compute-0 systemd[1]: Started libpod-conmon-b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e.scope.
Nov 26 01:55:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a28a51a5eb553af3e4e69e69af2172966177cbb021c3c6d78b693945383ea86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a28a51a5eb553af3e4e69e69af2172966177cbb021c3c6d78b693945383ea86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a28a51a5eb553af3e4e69e69af2172966177cbb021c3c6d78b693945383ea86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a28a51a5eb553af3e4e69e69af2172966177cbb021c3c6d78b693945383ea86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:55:56 compute-0 podman[421652]: 2025-11-26 01:55:56.657707663 +0000 UTC m=+0.249306175 container init b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:55:56 compute-0 podman[421652]: 2025-11-26 01:55:56.67640801 +0000 UTC m=+0.268006312 container start b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:55:56 compute-0 podman[421652]: 2025-11-26 01:55:56.691149805 +0000 UTC m=+0.282748107 container attach b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:55:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 184 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 753 KiB/s wr, 32 op/s
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]: {
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_id": 0,
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "type": "bluestore"
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    },
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_id": 2,
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "type": "bluestore"
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    },
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_id": 1,
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:        "type": "bluestore"
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]:    }
Nov 26 01:55:57 compute-0 beautiful_lovelace[421667]: }
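
The beautiful_lovelace JSON above maps each osd_uuid to its ceph_fsid, device-mapper path, and OSD id, and it agrees with the LVM tags printed earlier (e.g. osd.2's 8f697525-... on ceph_vg2). A sketch of inverting that map into osd_id -> device, assuming the JSON was saved to a hypothetical osd_map.json:

    import json

    # osd_uuid -> {ceph_fsid, device, osd_id, type}, as printed above.
    with open("osd_map.json") as f:
        osds = json.load(f)

    by_id = {meta["osd_id"]: meta["device"] for meta in osds.values()}
    for osd_id in sorted(by_id):
        print(f"osd.{osd_id} -> {by_id[osd_id]}")
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0, osd.1 -> ..., osd.2 -> ...
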
Nov 26 01:55:57 compute-0 systemd[1]: libpod-b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e.scope: Deactivated successfully.
Nov 26 01:55:57 compute-0 systemd[1]: libpod-b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e.scope: Consumed 1.098s CPU time.
Nov 26 01:55:57 compute-0 podman[421652]: 2025-11-26 01:55:57.786337112 +0000 UTC m=+1.377935434 container died b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a28a51a5eb553af3e4e69e69af2172966177cbb021c3c6d78b693945383ea86-merged.mount: Deactivated successfully.
Nov 26 01:55:57 compute-0 podman[421652]: 2025-11-26 01:55:57.880536056 +0000 UTC m=+1.472134348 container remove b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:55:57 compute-0 systemd[1]: libpod-conmon-b11b901904767f78105c9ad30fa11f4973f0d101b85985e3bbb9ff941415137e.scope: Deactivated successfully.
Nov 26 01:55:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:55:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:55:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 61b3a23f-8620-440d-b7ac-dce8f4b1f807 does not exist
Nov 26 01:55:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f5d56910-aa75-479c-9e74-b4aaca0d1f4e does not exist
Nov 26 01:55:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 198 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Nov 26 01:55:58 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:55:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:55:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 7846 writes, 31K keys, 7846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7846 writes, 1645 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 717 writes, 2400 keys, 717 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
    Interval WAL: 717 writes, 284 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 01:55:59 compute-0 podman[158021]: time="2025-11-26T01:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:55:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:55:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8639 "" "Go-http-client/1.1"
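
The two GET lines above are the podman system service answering prometheus-podman-exporter over its API socket (the exporter's config later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch of issuing the same containers/json query over that socket, with the socket path and API version taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; enough for the libpod REST API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    # Socket path and API version as seen in this log (podman 4.9.3 service).
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
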
Nov 26 01:55:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:56:01 compute-0 nova_compute[350387]: 2025-11-26 01:56:01.320 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:01 compute-0 openstack_network_exporter[367323]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:56:01 compute-0 openstack_network_exporter[367323]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:56:01 compute-0 openstack_network_exporter[367323]: ERROR   01:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:56:01 compute-0 openstack_network_exporter[367323]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:56:01 compute-0 openstack_network_exporter[367323]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:56:01 compute-0 nova_compute[350387]: 2025-11-26 01:56:01.525 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:56:04 compute-0 podman[421765]: 2025-11-26 01:56:04.585624071 +0000 UTC m=+0.120460354 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:56:04 compute-0 podman[421763]: 2025-11-26 01:56:04.597504336 +0000 UTC m=+0.144251425 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 01:56:04 compute-0 podman[421764]: 2025-11-26 01:56:04.614138805 +0000 UTC m=+0.163243281 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
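
The health_status=healthy events above are podman's periodic healthchecks executing each container's configured test (the '/openstack/healthcheck ...' commands in config_data). The same check can be triggered by hand; the container name is taken from the log:

    import subprocess

    # Runs the container's configured healthcheck once; exit status 0 == healthy.
    subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_compute"],
                   check=True)
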
Nov 26 01:56:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:56:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 01:56:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.2 total, 600.0 interval
    Cumulative writes: 6678 writes, 26K keys, 6678 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6678 writes, 1316 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 732 writes, 2356 keys, 732 commit groups, 1.0 writes per commit group, ingest: 2.30 MB, 0.00 MB/s
    Interval WAL: 732 writes, 312 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
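
The "writes per sync" figures in these RocksDB dumps are plain ratios of the counters on the same line; a one-line check against the interval numbers above:

    # Interval WAL above: 732 writes, 312 syncs -> 2.35 writes per sync.
    writes, syncs = 732, 312
    print(f"{writes / syncs:.2f} writes per sync")  # 2.35
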
Nov 26 01:56:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 01:56:06 compute-0 nova_compute[350387]: 2025-11-26 01:56:06.324 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:06 compute-0 nova_compute[350387]: 2025-11-26 01:56:06.528 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Nov 26 01:56:07 compute-0 podman[421820]: 2025-11-26 01:56:07.622987929 +0000 UTC m=+0.159213467 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:56:07 compute-0 podman[421821]: 2025-11-26 01:56:07.65107405 +0000 UTC m=+0.186510356 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:56:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 767 KiB/s wr, 25 op/s
Nov 26 01:56:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:10 compute-0 podman[421862]: 2025-11-26 01:56:10.54513619 +0000 UTC m=+0.098119855 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, distribution-scope=public, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 01:56:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 73 KiB/s wr, 18 op/s
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:11 compute-0 nova_compute[350387]: 2025-11-26 01:56:11.327 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:11 compute-0 nova_compute[350387]: 2025-11-26 01:56:11.530 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Nov 26 01:56:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:14 compute-0 podman[421882]: 2025-11-26 01:56:14.820105196 +0000 UTC m=+0.118664944 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 01:56:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Nov 26 01:56:16 compute-0 nova_compute[350387]: 2025-11-26 01:56:16.334 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:16 compute-0 nova_compute[350387]: 2025-11-26 01:56:16.534 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
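
The recurring ceph-mgr pgmap lines carry the cluster's usage and, when nonzero, client throughput; note the rate suffix disappears when the cluster goes idle, as in v1365 above. A sketch of a regex that tolerates both forms:

    import re

    # The rate segment ("; 91 KiB/s rd, 361 KiB/s wr, 25 op/s") is optional.
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?"
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
        r"(?:; (?P<rates>.*))?$"
    )

    line = ("pgmap v1354: 321 pgs: 321 active+clean; 177 MiB data, 288 MiB used, "
            "60 GiB / 60 GiB avail; 91 KiB/s rd, 361 KiB/s wr, 25 op/s")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("pgs"), m.group("rates"))
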
Nov 26 01:56:17 compute-0 podman[421902]: 2025-11-26 01:56:17.578408552 +0000 UTC m=+0.134676495 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter)
Nov 26 01:56:17 compute-0 podman[421923]: 2025-11-26 01:56:17.776152143 +0000 UTC m=+0.129804659 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:56:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:20 compute-0 nova_compute[350387]: 2025-11-26 01:56:20.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:21 compute-0 nova_compute[350387]: 2025-11-26 01:56:21.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:21 compute-0 nova_compute[350387]: 2025-11-26 01:56:21.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:21 compute-0 nova_compute[350387]: 2025-11-26 01:56:21.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:21 compute-0 nova_compute[350387]: 2025-11-26 01:56:21.341 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:21 compute-0 nova_compute[350387]: 2025-11-26 01:56:21.536 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
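
The Acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing. A generic sketch of the pattern that produces it (the API shape only, not the actual nova resource-tracker code):

    from oslo_concurrency import lockutils

    # Decorator form: the lock name matches the "compute_resources" seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # work done while the lock is held

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        pass
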
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.343 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.344 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:56:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:56:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746709432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.862 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
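
The resource audit above shells out to `ceph df --format=json` with the openstack keyring and parses the totals. A standalone sketch of the same probe; the "stats" key names follow `ceph df --format=json` output and may vary across Ceph releases:

    import json
    import subprocess

    # Same command nova ran above, captured and parsed directly.
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free of "
          f"{stats['total_bytes'] / gib:.1f} GiB")
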
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.956 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.957 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.957 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.961 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.962 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.962 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.968 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.969 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:23 compute-0 nova_compute[350387]: 2025-11-26 01:56:23.970 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.573 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.577 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.88883972167969GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.577 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.578 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
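The "compute_resources" acquire/release messages above come from oslo.concurrency's lockutils, which nova uses to serialize resource-tracker updates. A minimal sketch of the same pattern with an in-process lock (illustrative only; nova's decorator wiring differs):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: only one thread mutates the resource view at a
        # time. Acquisition and release are logged like the lines above.
        pass

    # The context-manager form is equivalent:
    with lockutils.lock('compute_resources'):
        pass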
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.664 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.665 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.665 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.666 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.666 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:56:24 compute-0 nova_compute[350387]: 2025-11-26 01:56:24.752 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:56:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:56:24.974 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:56:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:56:24.975 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:56:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:56:24.976 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:56:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:56:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1217407882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:56:25 compute-0 nova_compute[350387]: 2025-11-26 01:56:25.248 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
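The half-second "ceph df" round trip above is how nova sizes RBD-backed storage for the resource view. A sketch of the same call via oslo.concurrency, assuming the ceph CLI and the client.openstack keyring are available on the host; the JSON key names match current ceph releases but are worth verifying on yours:

    import json
    from oslo_concurrency import processutils

    # Same invocation as logged above; raises ProcessExecutionError on
    # non-zero exit, so a 0 return means `out` holds the JSON report.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / 1024 ** 3, 'GiB available')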
Nov 26 01:56:25 compute-0 nova_compute[350387]: 2025-11-26 01:56:25.257 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:56:25 compute-0 nova_compute[350387]: 2025-11-26 01:56:25.272 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
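Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back far more than 8 guest vCPUs here. Worked through with the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2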
Nov 26 01:56:25 compute-0 nova_compute[350387]: 2025-11-26 01:56:25.274 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:56:25 compute-0 nova_compute[350387]: 2025-11-26 01:56:25.274 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:56:26 compute-0 nova_compute[350387]: 2025-11-26 01:56:26.276 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:56:26 compute-0 nova_compute[350387]: 2025-11-26 01:56:26.276 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:56:26 compute-0 nova_compute[350387]: 2025-11-26 01:56:26.277 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:56:26 compute-0 nova_compute[350387]: 2025-11-26 01:56:26.347 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:26 compute-0 nova_compute[350387]: 2025-11-26 01:56:26.540 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:56:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248836822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:56:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:56:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248836822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:56:27 compute-0 nova_compute[350387]: 2025-11-26 01:56:27.389 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:56:27 compute-0 nova_compute[350387]: 2025-11-26 01:56:27.389 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:56:27 compute-0 nova_compute[350387]: 2025-11-26 01:56:27.390 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 01:56:27 compute-0 nova_compute[350387]: 2025-11-26 01:56:27.390 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:56:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 01:56:29 compute-0 podman[158021]: time="2025-11-26T01:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:56:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:56:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8629 "" "Go-http-client/1.1"
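The two GETs above hit podman's libpod REST API over its UNIX socket; the exporter configuration later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock. A self-contained sketch of the same container-list query (the socket path is a deployment default and may differ):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a UNIX socket; libpod exposes no TCP port here."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')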
Nov 26 01:56:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:30 compute-0 nova_compute[350387]: 2025-11-26 01:56:30.187 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:56:30 compute-0 nova_compute[350387]: 2025-11-26 01:56:30.206 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:56:30 compute-0 nova_compute[350387]: 2025-11-26 01:56:30.207 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 01:56:30 compute-0 nova_compute[350387]: 2025-11-26 01:56:30.208 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
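The info_cache payload logged above nests fixed and floating IPs several levels deep. A short sketch that walks a trimmed copy of that structure (field names exactly as logged) to pair each fixed address with its floating IPs:

    import json

    network_info = json.loads('''[
        {"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29",
         "network": {"subnets": [
             {"ips": [{"address": "192.168.0.29",
                       "floating_ips": [{"address": "192.168.122.186"}]}]}]}}]''')
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)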
Nov 26 01:56:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:56:31 compute-0 nova_compute[350387]: 2025-11-26 01:56:31.354 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:31 compute-0 openstack_network_exporter[367323]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:56:31 compute-0 openstack_network_exporter[367323]: ERROR   01:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:56:31 compute-0 openstack_network_exporter[367323]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:56:31 compute-0 openstack_network_exporter[367323]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:56:31 compute-0 openstack_network_exporter[367323]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
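These exporter errors are expected on a compute-only node: ovn-northd and a standalone ovsdb-server do not run here, so the control sockets the appctl calls need never exist, and the userspace datapath queries fail for the same reason. A sketch of the underlying existence check, assuming the usual socket directories (paths vary by packaging):

    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'no control socket found')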
Nov 26 01:56:31 compute-0 nova_compute[350387]: 2025-11-26 01:56:31.542 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:56:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:56:35 compute-0 podman[421992]: 2025-11-26 01:56:35.539226176 +0000 UTC m=+0.090684765 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:56:35 compute-0 podman[421991]: 2025-11-26 01:56:35.584207774 +0000 UTC m=+0.131051663 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:56:35 compute-0 podman[421993]: 2025-11-26 01:56:35.615462624 +0000 UTC m=+0.156226021 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:56:36 compute-0 nova_compute[350387]: 2025-11-26 01:56:36.357 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:36 compute-0 nova_compute[350387]: 2025-11-26 01:56:36.547 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:56:38 compute-0 podman[422050]: 2025-11-26 01:56:38.543704398 +0000 UTC m=+0.091919751 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:56:38 compute-0 podman[422051]: 2025-11-26 01:56:38.631368738 +0000 UTC m=+0.172328226 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 01:56:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 01:56:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:56:41
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta']
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:56:41 compute-0 nova_compute[350387]: 2025-11-26 01:56:41.365 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:41 compute-0 nova_compute[350387]: 2025-11-26 01:56:41.549 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:41 compute-0 podman[422095]: 2025-11-26 01:56:41.559102636 +0000 UTC m=+0.100486322 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:56:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.867 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.868 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
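The two lines above mean this polling task has more pollsters than worker threads, so they execute one after another. A sketch of the dispatch pattern (not ceilometer's actual code, whose executor wiring is more involved):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for one pollster's collection cycle.
        return f'{name}: polled'

    pollsters = ['disk.ephemeral.size', 'network.incoming.packets', 'disk.root.size']
    with ThreadPoolExecutor(max_workers=1) as pool:   # 1 worker, as logged
        for result in pool.map(poll, pollsters):
            print(result)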
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.868 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.878 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:56:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:42.881 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 01:56:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.629 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Wed, 26 Nov 2025 01:56:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ad01f10c-e38a-40f0-855c-b32318b57610 x-openstack-request-id: req-ad01f10c-e38a-40f0-855c-b32318b57610 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.629 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2", "name": "vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o", "status": "ACTIVE", "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "user_id": "b130e7a8bed3424f9f5ff63b35cd2b28", "metadata": {"metering.server_group": "366b90b6-2e85-40c4-9ca1-855cf9022409"}, "hostId": "2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1", "image": {"id": "48e08d00-37a3-4465-a949-ff0b8afe4def", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/48e08d00-37a3-4465-a949-ff0b8afe4def"}]}, "flavor": {"id": "030e95e2-5458-42ef-a5df-79a19c0b681d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/030e95e2-5458-42ef-a5df-79a19c0b681d"}]}, "created": "2025-11-26T01:55:08Z", "updated": "2025-11-26T01:55:20Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.36", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d6:c0:70"}, {"version": 4, "addr": "192.168.122.202", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d6:c0:70"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T01:55:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.630 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 used request id req-ad01f10c-e38a-40f0-855c-b32318b57610 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
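The REQ/RESP pair above is ceilometer's novaclient fetching server metadata at compute microversion 2.1. The equivalent client-side call, with placeholder credentials and a hypothetical keystone URL (only the server UUID and the microversion come from the log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',  # placeholder
                       username='ceilometer', password='secret',        # placeholders
                       project_name='service',
                       user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2')
    print(server.name, server.status)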
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.631 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'name': 'vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.635 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.639 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'name': 'vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.639 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.640 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.640 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.640 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.640 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.641 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.641 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.641 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.641 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.641 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T01:56:43.640319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T01:56:43.641643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.647 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 / tap867227e5-44 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.647 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.652 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.656 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
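The per-instance volumes above (12, 20, 34 packets) are cumulative vNIC counters read through libvirt; the earlier "No delta meter predecessor" DEBUG line just means this was the first reading for that tap device, so delta-style meters had nothing to subtract from yet. A minimal sketch of the read path, assuming the libvirt-python bindings on the compute host; the UUID and tap name are copied from the log:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2")

    # interfaceStats() returns cumulative counters for one vNIC:
    # (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop)
    rx_bytes, rx_packets, *_ = dom.interfaceStats("tap867227e5-44")
    print(rx_packets)  # becomes the network.incoming.packets sample volume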
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.657 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.658 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.659 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.660 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.661 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes volume: 4740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.663 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T01:56:43.657612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T01:56:43.658554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T01:56:43.660011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T01:56:43.661566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T01:56:43.663030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.700 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/cpu volume: 34480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.736 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 40520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.769 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/cpu volume: 274710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.770 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
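The cpu volumes (34480000000, 40520000000, 274710000000) are cumulative guest CPU time in nanoseconds, so the third instance has burned roughly 275 s of CPU. A minimal sketch of where that number comes from, assuming libvirt-python:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("0e500d52-72e1-4501-b4d6-fc6ca575760f")

    # info() -> [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime (ns)]
    state, max_mem_kib, mem_kib, nr_vcpu, cpu_time_ns = dom.info()
    print(cpu_time_ns / 1e9, "seconds")  # e.g. 274710000000 ns ~= 274.71 s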
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.770 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.770 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.770 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.771 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.771 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.771 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.771 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.771 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
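The *.delta meters report the change since the previous poll rather than the cumulative counter, so a volume of 0 just means no traffic moved during the interval. A minimal sketch of the predecessor cache that makes this work, assuming a process-local dict as the store (hypothetical names; ceilometer keeps this state inside its inspector/pollster cache):

    # Last cumulative reading per (instance, nic); process-local for the sketch.
    _prev = {}

    def delta(instance_id: str, nic: str, cumulative: int):
        # Return the change since the previous poll, or None on the first
        # poll for this NIC (the "No delta meter predecessor" case above).
        key = (instance_id, nic)
        before = _prev.get(key)
        _prev[key] = cumulative
        return None if before is None else cumulative - before

    print(delta("a8b199f7", "tap867227e5-44", 2146))  # None: first poll
    print(delta("a8b199f7", "tap867227e5-44", 2216))  # 70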
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T01:56:43.771137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
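The memory.usage volumes (49.03515625, 48.859375, 49.125) are megabytes derived from the guest balloon statistics that libvirt exposes in KiB, hence the neat /1024 fractions. A minimal sketch, assuming libvirt-python; the available-minus-unused derivation shown is one plausible variant of the formula, not necessarily the one this ceilometer release uses:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2")

    stats = dom.memoryStats()  # KiB values: 'actual', 'available', 'unused', 'rss', ...
    if "available" in stats and "unused" in stats:
        used_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        used_mb = stats["rss"] / 1024.0  # fallback when no balloon stats exist
    print(used_mb)  # e.g. 49.03515625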
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.773 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o>]
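This ERROR is deliberate back-pressure, not a crash: the libvirt inspector has no *.rate data, so the pollster raises PollsterPermanentError and the manager blacklists that pollster/resource pair instead of retrying it every cycle. A minimal sketch of the contract, assuming ceilometer's plugin_base is importable in the agent's environment; the pollster class is illustrative:

    from ceilometer.polling import plugin_base

    class OutgoingBytesRateSketch:
        # Illustrative pollster: when the inspector cannot supply rate
        # samples at all, signal a permanent failure for these resources
        # so the manager stops scheduling them (the log line above).
        def get_samples(self, manager, cache, resources):
            samples = []  # LibvirtInspector yields nothing for rate meters
            if not samples:
                raise plugin_base.PollsterPermanentError(resources)
            return samples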
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T01:56:43.772596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.774 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T01:56:43.774038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.775 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.776 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.777 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T01:56:43.775104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.777 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.777 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.777 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.778 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.779 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.780 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.780 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.780 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.780 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.780 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T01:56:43.776509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.782 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T01:56:43.778191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T01:56:43.779473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T01:56:43.781231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T01:56:43.782558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.810 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.811 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.811 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.837 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.838 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.838 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.864 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.865 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.866 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.867 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
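disk.device.capacity emits one sample per attached block device, which is why each instance reports three volumes above: two 1 GiB disks plus a small (~570 KB) config-drive-style device. A minimal sketch of the underlying call, assuming libvirt-python; the device names are illustrative:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2")

    for dev in ("vda", "vdb"):  # illustrative device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)  # bytes; 1073741824 for a 1 GiB disk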
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.867 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.867 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.867 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.868 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.868 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T01:56:43.868155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.967 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.968 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:43.968 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.050 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.051 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.051 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.143 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.144 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.145 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.146 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
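Like the capacity meter, disk.device.read.bytes is a cumulative per-device counter, and the same call also feeds disk.device.read.requests polled further below. A minimal sketch, same assumptions as the previous snippet:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2")

    # blockStats() -> (rd_req, rd_bytes, wr_req, wr_bytes, errs)
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")  # name illustrative
    print(rd_req, rd_bytes)  # disk.device.read.requests / disk.device.read.bytes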
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.146 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.146 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.147 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.147 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.147 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.147 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o>]
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.148 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.148 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.149 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.149 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.149 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.149 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 1818076010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T01:56:44.147203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T01:56:44.149449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.150 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 286055535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.151 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 221080770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.151 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.152 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.152 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.153 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 2021453674 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.153 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 321911498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.154 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 237452008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.155 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.155 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.155 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.156 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.156 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.156 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.156 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.157 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.157 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T01:56:44.156169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.158 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.159 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.159 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.160 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.160 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.161 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
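Each meter above runs the same five-step cycle: discovery, coordination check, heartbeat, one _stats_to_sample line per instance device, then "Finished polling". A minimal sketch for mining those sample lines out of a journald capture follows; the file name compute-0.log is illustrative, and the regex assumes integer volumes as seen here (some rate meters may emit floats).

    import re
    from collections import defaultdict

    # Matches the "_stats_to_sample" DEBUG lines above, e.g.
    #   ... ceilometer.compute.pollsters [-] <uuid>/disk.device.read.requests volume: 840 _stats_to_sample ...
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
    )

    def samples(lines):
        """Yield (instance UUID, meter name, volume) tuples from journald lines."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m["instance"], m["meter"], int(m["volume"])

    per_meter = defaultdict(int)
    with open("compute-0.log") as fh:       # hypothetical capture of this journal
        for _uuid, meter, _vol in samples(fh):
            per_meter[meter] += 1           # e.g. 9 samples per disk meter here: 3 VMs x 3 devices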
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.162 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.162 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.162 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.162 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.162 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.163 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T01:56:44.162506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.164 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.164 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.164 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.165 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.165 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.166 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.166 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.167 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
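The recurring usage volume 1073741824 is exactly 1 GiB (2^30 bytes); the much smaller third device per instance (583680 or 485376 bytes) would be consistent with a config-drive, though that is an inference rather than something the log states. A quick check:

    # 2**30 == 1073741824, so the repeated volume is exactly 1 GiB.
    for vol in (1073741824, 583680, 485376):
        print(f"{vol} bytes = {vol / 2**30:.6f} GiB")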
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.168 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.168 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.168 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 41713664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T01:56:44.168535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.169 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.170 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.170 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.170 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.171 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.171 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.172 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.172 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.173 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.173 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.174 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.174 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.174 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.174 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.174 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.175 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.175 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
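The power.state samples of 1 for all three instances read as RUNNING under Nova's power-state enumeration (nova.compute.power_state), which this meter reuses; the mapping below is quoted from memory and worth verifying against the installed release.

    # nova.compute.power_state values (2 and 5 are unused in current releases);
    # all three instances above report 1, i.e. RUNNING.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    assert POWER_STATE[1] == "RUNNING"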
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.177 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T01:56:44.174636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.178 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T01:56:44.178014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.178 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 4902824567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.179 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 30681884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.179 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.180 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.180 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.181 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.181 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 8294131606 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.182 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 31365598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.182 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.184 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.184 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.185 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 223 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.185 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.186 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T01:56:44.184722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.186 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.186 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.187 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.187 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.187 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.188 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.188 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.189 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.189 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.189 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.189 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.190 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.190 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.190 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.191 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.191 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.191 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.191 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T01:56:44.189425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.192 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:56:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:56:44.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
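The burst of "Finished processing pollster" lines closes the whole polling task. Rough arithmetic on the timestamps above puts the tail of this cycle, from the first heartbeat at 01:56:44.149 to the last finish at 01:56:44.196, under 50 ms:

    from datetime import datetime

    first = datetime.fromisoformat("2025-11-26T01:56:44.149000")
    last = datetime.fromisoformat("2025-11-26T01:56:44.196000")
    print((last - first).total_seconds() * 1000, "ms")   # 47.0 ms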
Nov 26 01:56:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:45 compute-0 podman[422117]: 2025-11-26 01:56:45.567631866 +0000 UTC m=+0.120244759 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 01:56:46 compute-0 nova_compute[350387]: 2025-11-26 01:56:46.370 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:46 compute-0 nova_compute[350387]: 2025-11-26 01:56:46.552 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:48 compute-0 podman[422137]: 2025-11-26 01:56:48.575533133 +0000 UTC m=+0.116259096 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:56:48 compute-0 podman[422136]: 2025-11-26 01:56:48.592283355 +0000 UTC m=+0.152291191 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
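The two podman health_status records above embed the entire container config in one line; for monitoring, usually only the container name and status matter. A fragile but serviceable scrape, assuming the "name=..., health_status=..." key order seen here:

    import re

    # Assumes the field order observed above: "... name=<x>, health_status=<y>, ..."
    HEALTH_RE = re.compile(
        r"container health_status .*?name=(?P<name>\w+), health_status=(?P<status>\w+)"
    )

    def health(lines):
        """Yield (container name, health status) from podman journal lines."""
        for line in lines:
            m = HEALTH_RE.search(line)
            if m:
                yield m["name"], m["status"]   # ("multipathd", "healthy"), ...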
Nov 26 01:56:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016576825714657811 of space, bias 1.0, pg target 0.49730477143973434 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:56:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
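Every pg target the autoscaler prints above is reproducible as usage_fraction x bias x (mon_target_pg_per_osd x OSD count), after which the result is quantized to a power of two. The sketch assumes 3 OSDs and the default mon_target_pg_per_osd = 100; both are assumptions, but they fit these numbers exactly.

    # Hedged reconstruction of the pg_autoscaler arithmetic, assuming 3 OSDs
    # and the default mon_target_pg_per_osd = 100 (i.e. 300 target PGs total).
    TARGET_PGS = 100 * 3

    def pg_target(usage_fraction, bias=1.0):
        return usage_fraction * bias * TARGET_PGS

    print(pg_target(0.0016576825714657811))       # 0.49730477... -> pool 'vms'
    print(pg_target(7.185749983720779e-06))       # 0.00215572... -> pool '.mgr'
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... -> cephfs.cephfs.meta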
Nov 26 01:56:51 compute-0 nova_compute[350387]: 2025-11-26 01:56:51.377 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:51 compute-0 nova_compute[350387]: 2025-11-26 01:56:51.556 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:56 compute-0 nova_compute[350387]: 2025-11-26 01:56:56.384 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:56 compute-0 nova_compute[350387]: 2025-11-26 01:56:56.559 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:56:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:56:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a8a2cb90-2164-49f5-85d1-77e0427f7df6 does not exist
Nov 26 01:56:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7d1c4800-4cdd-4b58-911b-c60ef979dd8f does not exist
Nov 26 01:56:59 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev eb40719b-2fef-4617-a00e-7e780c65ac1b does not exist
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:56:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:56:59 compute-0 podman[158021]: time="2025-11-26T01:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:56:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:56:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Nov 26 01:56:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.862772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219862899, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1355, "num_deletes": 505, "total_data_size": 1597372, "memory_usage": 1622856, "flush_reason": "Manual Compaction"}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219873683, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 950495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27564, "largest_seqno": 28918, "table_properties": {"data_size": 945752, "index_size": 1691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14959, "raw_average_key_size": 19, "raw_value_size": 933555, "raw_average_value_size": 1195, "num_data_blocks": 77, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122113, "oldest_key_time": 1764122113, "file_creation_time": 1764122219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10991 microseconds, and 6383 cpu microseconds.
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.873765) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 950495 bytes OK
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.873787) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.877876) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.877900) EVENT_LOG_v1 {"time_micros": 1764122219877893, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.877921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1590262, prev total WAL file size 1590262, number of live WAL files 2.
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.879637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(928KB)], [62(8767KB)]
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219879708, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9928106, "oldest_snapshot_seqno": -1}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 4820 keys, 7177132 bytes, temperature: kUnknown
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219935535, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7177132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7145896, "index_size": 18070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 121723, "raw_average_key_size": 25, "raw_value_size": 7059609, "raw_average_value_size": 1464, "num_data_blocks": 748, "num_entries": 4820, "num_filter_entries": 4820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.936039) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7177132 bytes
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.938585) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.6 rd, 128.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.6 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(18.0) write-amplify(7.6) OK, records in: 5795, records dropped: 975 output_compression: NoCompression
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.938614) EVENT_LOG_v1 {"time_micros": 1764122219938601, "job": 34, "event": "compaction_finished", "compaction_time_micros": 55900, "compaction_time_cpu_micros": 32204, "output_level": 6, "num_output_files": 1, "total_output_size": 7177132, "num_input_records": 5795, "num_output_records": 4820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219939141, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122219942476, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.879321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.942763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.942770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.942773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.942776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:56:59 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:56:59.942779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
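
Each RocksDB EVENT_LOG_v1 record above is a single-line JSON document after a fixed prefix, so flush and compaction statistics can be recovered mechanically from a journal dump; for example, job 34's write-amplify(7.6) is just total output over the new L0 input (7177132 / 950495, about 7.6). A parsing sketch, with a hypothetical input file name:

    # Extract EVENT_LOG_v1 records from a journal dump like the one above.
    import json
    import re

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def iter_events(path):
        """Yield each EVENT_LOG_v1 record on a line as a dict."""
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    # Example: summarize finished compactions (job 34 above reports
    # total_output_size 7177132 in 55900 compaction_time_micros).
    for ev in iter_events('mon-journal.txt'):
        if ev.get('event') == 'compaction_finished':
            print(ev['job'], ev['total_output_size'], ev['compaction_time_micros'])
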
Nov 26 01:57:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:57:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:57:00 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.248001784 +0000 UTC m=+0.087890207 container create 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.20951056 +0000 UTC m=+0.049399073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:00 compute-0 systemd[1]: Started libpod-conmon-082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0.scope.
Nov 26 01:57:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.429775206 +0000 UTC m=+0.269663699 container init 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.446756534 +0000 UTC m=+0.286644967 container start 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.452321631 +0000 UTC m=+0.292210084 container attach 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:57:00 compute-0 nervous_wescoff[422464]: 167 167
Nov 26 01:57:00 compute-0 systemd[1]: libpod-082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0.scope: Deactivated successfully.
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.459531734 +0000 UTC m=+0.299420157 container died 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 01:57:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6859ec6340cb21139ebf3ae01fb6598e3ed266ca85839366d467f3237c938ba-merged.mount: Deactivated successfully.
Nov 26 01:57:00 compute-0 podman[422448]: 2025-11-26 01:57:00.542725828 +0000 UTC m=+0.382614251 container remove 082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:57:00 compute-0 systemd[1]: libpod-conmon-082962b828f4953af820f4b2a19724c244ae24b59414f11a75ac972bee55ddc0.scope: Deactivated successfully.
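
The create, start, attach, died, remove sequence above completes in well under a second and the container's only output is "167 167", the uid/gid of the ceph user on RHEL-family images; this is consistent with cephadm launching a throwaway container to probe the image. A hypothetical reproduction follows: the stat target is an assumption, and only the image digest comes from the log.

    # Hypothetical re-run of the throwaway probe container seen above.
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         image, '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expected: "167 167", matching the log
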
Nov 26 01:57:00 compute-0 podman[422487]: 2025-11-26 01:57:00.774898649 +0000 UTC m=+0.073744179 container create 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:57:00 compute-0 podman[422487]: 2025-11-26 01:57:00.735053956 +0000 UTC m=+0.033899516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:00 compute-0 systemd[1]: Started libpod-conmon-5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d.scope.
Nov 26 01:57:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:00 compute-0 podman[422487]: 2025-11-26 01:57:00.922138377 +0000 UTC m=+0.220983917 container init 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 01:57:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:00 compute-0 podman[422487]: 2025-11-26 01:57:00.933177218 +0000 UTC m=+0.232022778 container start 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:57:00 compute-0 podman[422487]: 2025-11-26 01:57:00.940785422 +0000 UTC m=+0.239630972 container attach 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 01:57:01 compute-0 nova_compute[350387]: 2025-11-26 01:57:01.387 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:01 compute-0 openstack_network_exporter[367323]: ERROR   01:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:57:01 compute-0 openstack_network_exporter[367323]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:57:01 compute-0 openstack_network_exporter[367323]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:57:01 compute-0 openstack_network_exporter[367323]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:57:01 compute-0 openstack_network_exporter[367323]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:57:01 compute-0 nova_compute[350387]: 2025-11-26 01:57:01.562 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:01 compute-0 determined_chaplygin[422503]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:57:01 compute-0 determined_chaplygin[422503]: --> relative data size: 1.0
Nov 26 01:57:01 compute-0 determined_chaplygin[422503]: --> All data devices are unavailable
Nov 26 01:57:02 compute-0 systemd[1]: libpod-5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d.scope: Deactivated successfully.
Nov 26 01:57:02 compute-0 systemd[1]: libpod-5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d.scope: Consumed 1.024s CPU time.
Nov 26 01:57:02 compute-0 podman[422487]: 2025-11-26 01:57:02.035417984 +0000 UTC m=+1.334263574 container died 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5c2243c2d988f2cab4b04e89bb65acfc2a6d37dec6aaeb91f6f36fa81b05f0-merged.mount: Deactivated successfully.
Nov 26 01:57:02 compute-0 podman[422487]: 2025-11-26 01:57:02.119999407 +0000 UTC m=+1.418844927 container remove 5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:57:02 compute-0 systemd[1]: libpod-conmon-5815e392e6d634dd57e51ad231a8986a80a9be1724b0791319f03d82216ca22d.scope: Deactivated successfully.
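
The determined_chaplygin lines above ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") read like a ceph-volume batch dry run against three logical volumes, which are "unavailable" simply because they already carry OSDs. A sketch of such a dry run, assuming the ceph_vg*/ceph_lv* names listed further below and a containerized invocation with /dev mounted; the exact mounts cephadm uses are not reproduced here.

    # Dry-run sketch: ceph-volume's batch mode with --report only prints
    # its plan and touches no devices.
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    lvs = ['/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
           '/dev/ceph_vg2/ceph_lv2']
    subprocess.run(
        ['podman', 'run', '--rm', '--privileged', '-v', '/dev:/dev',
         '--entrypoint', 'ceph-volume', image,
         'lvm', 'batch', '--report', *lvs],
        check=True,
    )
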
Nov 26 01:57:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.194495861 +0000 UTC m=+0.094538455 container create edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.162472028 +0000 UTC m=+0.062514702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:03 compute-0 systemd[1]: Started libpod-conmon-edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271.scope.
Nov 26 01:57:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.362566996 +0000 UTC m=+0.262609630 container init edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.377858927 +0000 UTC m=+0.277901521 container start edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.382714074 +0000 UTC m=+0.282756668 container attach edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 01:57:03 compute-0 lucid_vaughan[422698]: 167 167
Nov 26 01:57:03 compute-0 systemd[1]: libpod-edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271.scope: Deactivated successfully.
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.389220587 +0000 UTC m=+0.289263181 container died edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-775bcd92a1abb748e7e536db572b147d71e05bbe4e7a75b98dcb86a454e35975-merged.mount: Deactivated successfully.
Nov 26 01:57:03 compute-0 podman[422683]: 2025-11-26 01:57:03.454345312 +0000 UTC m=+0.354387916 container remove edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 01:57:03 compute-0 systemd[1]: libpod-conmon-edc7ec78ca4ed1565e3292da8d0d3feac81344461176ea2dfb3145c014b10271.scope: Deactivated successfully.
Nov 26 01:57:03 compute-0 podman[422723]: 2025-11-26 01:57:03.727003844 +0000 UTC m=+0.095025378 container create 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 26 01:57:03 compute-0 podman[422723]: 2025-11-26 01:57:03.688582952 +0000 UTC m=+0.056604536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:03 compute-0 systemd[1]: Started libpod-conmon-203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5.scope.
Nov 26 01:57:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8357ec5980ad9bade9d3c950d28a1a964d224502e69c6e2ce9e7a9d85435615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8357ec5980ad9bade9d3c950d28a1a964d224502e69c6e2ce9e7a9d85435615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8357ec5980ad9bade9d3c950d28a1a964d224502e69c6e2ce9e7a9d85435615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8357ec5980ad9bade9d3c950d28a1a964d224502e69c6e2ce9e7a9d85435615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:03 compute-0 podman[422723]: 2025-11-26 01:57:03.890581383 +0000 UTC m=+0.258602927 container init 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:57:03 compute-0 podman[422723]: 2025-11-26 01:57:03.92314768 +0000 UTC m=+0.291169204 container start 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 01:57:03 compute-0 podman[422723]: 2025-11-26 01:57:03.928252764 +0000 UTC m=+0.296274338 container attach 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]: {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    "0": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "devices": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "/dev/loop3"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            ],
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_name": "ceph_lv0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_size": "21470642176",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "name": "ceph_lv0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "tags": {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_name": "ceph",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.crush_device_class": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.encrypted": "0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_id": "0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.vdo": "0"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            },
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "vg_name": "ceph_vg0"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        }
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    ],
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    "1": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "devices": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "/dev/loop4"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            ],
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_name": "ceph_lv1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_size": "21470642176",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "name": "ceph_lv1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "tags": {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_name": "ceph",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.crush_device_class": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.encrypted": "0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_id": "1",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.vdo": "0"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            },
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "vg_name": "ceph_vg1"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        }
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    ],
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    "2": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "devices": [
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "/dev/loop5"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            ],
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_name": "ceph_lv2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_size": "21470642176",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "name": "ceph_lv2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "tags": {
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.cluster_name": "ceph",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.crush_device_class": "",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.encrypted": "0",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osd_id": "2",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:                "ceph.vdo": "0"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            },
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "type": "block",
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:            "vg_name": "ceph_vg2"
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:        }
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]:    ]
Nov 26 01:57:04 compute-0 infallible_chatterjee[422739]: }
Nov 26 01:57:04 compute-0 systemd[1]: libpod-203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5.scope: Deactivated successfully.
Nov 26 01:57:04 compute-0 podman[422723]: 2025-11-26 01:57:04.780457314 +0000 UTC m=+1.148478858 container died 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 01:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8357ec5980ad9bade9d3c950d28a1a964d224502e69c6e2ce9e7a9d85435615-merged.mount: Deactivated successfully.
Nov 26 01:57:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:04 compute-0 podman[422723]: 2025-11-26 01:57:04.895912527 +0000 UTC m=+1.263934051 container remove 203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 26 01:57:04 compute-0 systemd[1]: libpod-conmon-203c3da98dc54c1dd09c214ed56ab91c1e3575eec0e75cc190a547864049cda5.scope: Deactivated successfully.
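
The JSON block emitted by infallible_chatterjee above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* metadata duplicated in lv_tags and tags. A sketch that condenses it into an OSD-to-device table, with a hypothetical input file name:

    # Condense ceph-volume lvm list JSON (as printed above) into a table.
    import json

    with open('lvm-list.json') as fh:
        inventory = json.load(fh)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(fsid {tags['ceph.osd_fsid']})")
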
Nov 26 01:57:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:06 compute-0 podman[422897]: 2025-11-26 01:57:06.048050929 +0000 UTC m=+0.097927230 container create eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 01:57:06 compute-0 podman[422897]: 2025-11-26 01:57:06.01543321 +0000 UTC m=+0.065309561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:06 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:06.117 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 01:57:06 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:06.118 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 01:57:06 compute-0 nova_compute[350387]: 2025-11-26 01:57:06.121 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:06 compute-0 systemd[1]: Started libpod-conmon-eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3.scope.
Nov 26 01:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:06 compute-0 podman[422897]: 2025-11-26 01:57:06.191718027 +0000 UTC m=+0.241594348 container init eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 01:57:06 compute-0 podman[422897]: 2025-11-26 01:57:06.203947791 +0000 UTC m=+0.253824062 container start eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:57:06 compute-0 podman[422897]: 2025-11-26 01:57:06.209110457 +0000 UTC m=+0.258986768 container attach eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 01:57:06 compute-0 trusting_edison[422925]: 167 167
Nov 26 01:57:06 compute-0 systemd[1]: libpod-eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3.scope: Deactivated successfully.
Nov 26 01:57:06 compute-0 podman[422914]: 2025-11-26 01:57:06.225491788 +0000 UTC m=+0.097726824 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 26 01:57:06 compute-0 podman[422915]: 2025-11-26 01:57:06.24221958 +0000 UTC m=+0.104914307 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:57:06 compute-0 podman[422911]: 2025-11-26 01:57:06.261468752 +0000 UTC m=+0.133488502 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
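The three health_status=healthy events above are podman's periodic health checks for the ovn_metadata_agent, podman_exporter and ceilometer_agent_compute containers; the command each check runs is the healthcheck 'test' entry embedded in their config_data. A minimal sketch of invoking the same check by hand from Python, assuming podman is on PATH and using the container name from the log (illustrative only, not how edpm_ansible or systemd drives the checks):

    import subprocess

    # 'podman healthcheck run' executes the container's configured check once;
    # exit code 0 means healthy.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                         capture_output=True, text=True)
    print("healthy" if res.returncode == 0 else "unhealthy", res.stdout.strip())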
Nov 26 01:57:06 compute-0 podman[422971]: 2025-11-26 01:57:06.267696207 +0000 UTC m=+0.040635175 container died eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0ebce610c52c8bb4ca6bb1b5ad665a29b53a2b87e8a9976339d21295071dee-merged.mount: Deactivated successfully.
Nov 26 01:57:06 compute-0 podman[422971]: 2025-11-26 01:57:06.310297238 +0000 UTC m=+0.083236206 container remove eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:57:06 compute-0 systemd[1]: libpod-conmon-eef399be0842714e1b21a6d213e120963ded65031a5095027cfcfcefb9926dd3.scope: Deactivated successfully.
Nov 26 01:57:06 compute-0 nova_compute[350387]: 2025-11-26 01:57:06.389 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:06 compute-0 nova_compute[350387]: 2025-11-26 01:57:06.565 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:06 compute-0 podman[423000]: 2025-11-26 01:57:06.594014212 +0000 UTC m=+0.080304684 container create a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:57:06 compute-0 podman[423000]: 2025-11-26 01:57:06.557792671 +0000 UTC m=+0.044083143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:57:06 compute-0 systemd[1]: Started libpod-conmon-a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba.scope.
Nov 26 01:57:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc8854a8e34dc97a503709cd49ef5164baec08b16eed984ef2509fefee9250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc8854a8e34dc97a503709cd49ef5164baec08b16eed984ef2509fefee9250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc8854a8e34dc97a503709cd49ef5164baec08b16eed984ef2509fefee9250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc8854a8e34dc97a503709cd49ef5164baec08b16eed984ef2509fefee9250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:57:06 compute-0 podman[423000]: 2025-11-26 01:57:06.730357873 +0000 UTC m=+0.216648395 container init a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 01:57:06 compute-0 podman[423000]: 2025-11-26 01:57:06.766898723 +0000 UTC m=+0.253189205 container start a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:57:06 compute-0 podman[423000]: 2025-11-26 01:57:06.773526919 +0000 UTC m=+0.259817421 container attach a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:57:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:07 compute-0 trusting_jang[423018]: {
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_id": 0,
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "type": "bluestore"
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    },
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_id": 2,
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "type": "bluestore"
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    },
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_id": 1,
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:57:07 compute-0 trusting_jang[423018]:        "type": "bluestore"
Nov 26 01:57:07 compute-0 trusting_jang[423018]:    }
Nov 26 01:57:07 compute-0 trusting_jang[423018]: }
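The JSON printed by the short-lived trusting_jang container above is an OSD inventory keyed by osd_uuid, with each entry carrying ceph_fsid, device, osd_id and type; it matches the shape of ceph-volume's JSON list output, though the exact subcommand cephadm ran inside the container is not visible in the log. A small sketch, assuming exactly the keys shown above, that reduces such a blob to an osd_id-to-device map:

    import json

    def osds_by_id(blob: str) -> dict:
        """Map osd_id -> device path for a listing shaped like the one above."""
        return {entry["osd_id"]: entry["device"]
                for entry in json.loads(blob).values()}

    # Fed the log's JSON, this returns:
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0', 2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1'}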
Nov 26 01:57:07 compute-0 systemd[1]: libpod-a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba.scope: Deactivated successfully.
Nov 26 01:57:07 compute-0 systemd[1]: libpod-a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba.scope: Consumed 1.203s CPU time.
Nov 26 01:57:08 compute-0 podman[423052]: 2025-11-26 01:57:08.044693834 +0000 UTC m=+0.048255241 container died a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffbc8854a8e34dc97a503709cd49ef5164baec08b16eed984ef2509fefee9250-merged.mount: Deactivated successfully.
Nov 26 01:57:08 compute-0 podman[423052]: 2025-11-26 01:57:08.134118813 +0000 UTC m=+0.137680150 container remove a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 01:57:08 compute-0 systemd[1]: libpod-conmon-a37f2a100225c10fe10d4f4f448317c3b4253bb2598eb392862f43fea310aaba.scope: Deactivated successfully.
Nov 26 01:57:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:57:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:57:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:57:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:57:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3606c59d-8de1-46f5-9d35-be8c003b56a3 does not exist
Nov 26 01:57:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6a415e44-20cb-4c07-9e86-2e6a4fe0660a does not exist
Nov 26 01:57:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:09.120 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 01:57:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:57:09 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:57:09 compute-0 podman[423117]: 2025-11-26 01:57:09.529098187 +0000 UTC m=+0.082420173 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 01:57:09 compute-0 podman[423118]: 2025-11-26 01:57:09.57464391 +0000 UTC m=+0.130455656 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:57:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:11 compute-0 nova_compute[350387]: 2025-11-26 01:57:11.393 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:11 compute-0 nova_compute[350387]: 2025-11-26 01:57:11.568 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:12 compute-0 podman[423161]: 2025-11-26 01:57:12.602129189 +0000 UTC m=+0.146749586 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler)
Nov 26 01:57:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:57:12 compute-0 nova_compute[350387]: 2025-11-26 01:57:12.966 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:12 compute-0 nova_compute[350387]: 2025-11-26 01:57:12.966 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:12 compute-0 nova_compute[350387]: 2025-11-26 01:57:12.984 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.100 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.101 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
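The paired "Acquiring lock …" / "Lock … acquired" DEBUG lines above (and the matching "released" lines later in this sequence) are emitted by oslo.concurrency's lock helpers, which nova wraps around _locked_do_build_and_run_instance and instance_claim. A minimal sketch of the same pattern, with illustrative names rather than nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        # The body runs with the named lock held; acquisition and release are
        # what show up as the DEBUG pairs above.
        pass

    # Context-manager form, as used for the per-instance UUID lock:
    with lockutils.lock("d32050dc-c041-47df-994e-7d05cf1f489a"):
        pass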
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.115 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.116 350391 INFO nova.compute.claims [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Claim successful on node compute-0.ctlplane.example.com
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.309 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:57:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417978308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.831 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.843 350391 DEBUG nova.compute.provider_tree [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.871 350391 DEBUG nova.scheduler.client.report [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
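The usable capacity placement derives from an inventory like the one above is (total - reserved) * allocation_ratio per resource class; a worked check with the logged numbers (arithmetic only, not nova code):

    # (total, reserved, allocation_ratio) per resource class, from the line above.
    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2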
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.900 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.902 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.953 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.954 350391 DEBUG nova.network.neutron [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 01:57:13 compute-0 nova_compute[350387]: 2025-11-26 01:57:13.978 350391 INFO nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.022 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.126 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.130 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.131 350391 INFO nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Creating image(s)
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.195 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.255 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.316 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.327 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.428 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.430 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "f456d938eec6117407d48c9debbc5604edb4194e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.431 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.432 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "f456d938eec6117407d48c9debbc5604edb4194e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.477 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.492 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e d32050dc-c041-47df-994e-7d05cf1f489a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:14 compute-0 nova_compute[350387]: 2025-11-26 01:57:14.929 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e d32050dc-c041-47df-994e-7d05cf1f489a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
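The two subprocess runs above show how nova shells out through oslo.concurrency: qemu-img info is wrapped by oslo_concurrency.prlimit (the --as=1073741824 --cpu=30 limits in the logged command line), and the cached base image is then pushed into the vms pool with rbd import. A sketch replaying the same calls with processutils, reusing the paths, pool and limits from the log (error handling omitted):

    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e"

    # qemu-img info under a 1 GiB address-space / 30 s CPU limit, as logged above.
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info", base,
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1 << 30, cpu_time=30))

    # Import the cached base image into the vms pool as the instance root disk.
    processutils.execute(
        "rbd", "import", "--pool", "vms", base,
        "d32050dc-c041-47df-994e-7d05cf1f489a_disk",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")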
Nov 26 01:57:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 201 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.052 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] resizing rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.292 350391 DEBUG nova.objects.instance [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'migration_context' on Instance uuid d32050dc-c041-47df-994e-7d05cf1f489a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.347 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.397 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.408 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.496 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.497 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.498 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.498 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.547 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.558 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.584 350391 DEBUG nova.network.neutron [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Successfully updated port: 25d715a2-34af-4ad1-bc6d-0303fb8763f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.603 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.604 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.604 350391 DEBUG nova.network.neutron [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.679 350391 DEBUG nova.compute.manager [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-changed-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.679 350391 DEBUG nova.compute.manager [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Refreshing instance network info cache due to event network-changed-25d715a2-34af-4ad1-bc6d-0303fb8763f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.679 350391 DEBUG oslo_concurrency.lockutils [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:57:15 compute-0 nova_compute[350387]: 2025-11-26 01:57:15.796 350391 DEBUG nova.network.neutron [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.037 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.302 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.304 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Ensure instance console log exists: /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.305 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.306 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.307 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.397 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:16 compute-0 podman[423500]: 2025-11-26 01:57:16.568497991 +0000 UTC m=+0.122874313 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.570 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.776 350391 DEBUG nova.network.neutron [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.807 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.807 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Instance network_info: |[{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
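The network_info payload in the two cache lines above is ordinary JSON; the single port carries a fixed 192.168.0.232 with a floating 192.168.122.234 attached. A minimal sketch of walking it, with a trimmed stand-in for the logged list:

```python
import json

# Trimmed stand-in for the network_info JSON logged above.
network_info = json.loads("""
[{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1",
  "address": "fa:16:3e:99:2d:81",
  "details": {"bridge_name": "br-int"},
  "network": {"meta": {"mtu": 1442},
              "subnets": [{"cidr": "192.168.0.0/24",
                           "ips": [{"address": "192.168.0.232",
                                    "floating_ips": [{"address": "192.168.122.234"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])
    print("mtu:", vif["network"]["meta"]["mtu"], "bridge:", vif["details"]["bridge_name"])
```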
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.808 350391 DEBUG oslo_concurrency.lockutils [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.809 350391 DEBUG nova.network.neutron [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Refreshing network info cache for port 25d715a2-34af-4ad1-bc6d-0303fb8763f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
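The Acquiring/Acquired/Releasing lines around the cache refresh come from oslo.concurrency's named locks. A minimal sketch of the pattern (not Nova's code; the body is a placeholder):

```python
from oslo_concurrency import lockutils

instance_uuid = "d32050dc-c041-47df-994e-7d05cf1f489a"  # from the log

# Serializes concurrent refreshers of the same instance's info cache;
# the lock name matches the "refresh_cache-<uuid>" seen above.
with lockutils.lock(f"refresh_cache-{instance_uuid}"):
    pass  # fetch port state from Neutron, store it in the info cache
```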
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.815 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Start _get_guest_xml network_info=[{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}], 'ephemerals': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'size': 1, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
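The disk_info mapping in the _get_guest_xml line above decides which guest device each logical disk becomes. Reading it back directly (literal copied from the log):

```python
# disk_info['mapping'] as logged above: logical disk -> guest device.
disk_info = {
    'disk_bus': 'virtio', 'cdrom_bus': 'sata',
    'mapping': {
        'root':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'},
        'disk':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'},
        'disk.eph0':   {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
        'disk.config': {'bus': 'sata',   'dev': 'sda', 'type': 'cdrom'},
    },
}

for name, m in disk_info['mapping'].items():
    print(f"{name:12s} -> /dev/{m['dev']}  type={m['type']} bus={m['bus']} "
          f"boot_index={m.get('boot_index', '-')}")
```

These mappings reappear verbatim as the vda/vdb/sda devices in the generated domain XML further down.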
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.827 350391 WARNING nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.850 350391 DEBUG nova.virt.libvirt.host [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.852 350391 DEBUG nova.virt.libvirt.host [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.859 350391 DEBUG nova.virt.libvirt.host [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.860 350391 DEBUG nova.virt.libvirt.host [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
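The two probes above first look for a cgroup v1 cpu controller (missing on this host) and then a v2 one (found). A generic sketch of that detection using the conventional sysfs paths, not Nova's exact code:

```python
from pathlib import Path

def has_cgroupsv1_cpu_controller() -> bool:
    # cgroup v1 mounts each controller separately, e.g. /sys/fs/cgroup/cpu
    return Path("/sys/fs/cgroup/cpu").is_dir()

def has_cgroupsv2_cpu_controller() -> bool:
    # the unified (v2) hierarchy lists enabled controllers in one file
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    return controllers.is_file() and "cpu" in controllers.read_text().split()

# On this host: v1 missing, v2 found, matching the two lines above.
print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())
```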
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.860 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.861 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T01:48:51Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='030e95e2-5458-42ef-a5df-79a19c0b681d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T01:48:44Z,direct_url=<?>,disk_format='qcow2',id=48e08d00-37a3-4465-a949-ff0b8afe4def,min_disk=0,min_ram=0,name='cirros',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T01:48:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.862 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.862 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.863 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.863 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.863 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.864 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.864 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.865 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.865 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.866 350391 DEBUG nova.virt.hardware [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
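With no flavor or image constraints (all limits and preferences 0:0:0, so the 65536 ceilings apply), the only sockets:cores:threads split of 1 vCPU is 1:1:1. A simplified sketch of that enumeration (not Nova's exact algorithm):

```python
def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    """Yield every sockets/cores/threads split whose product is vcpus."""
    for s in range(1, min(max_sockets, vcpus) + 1):
        for c in range(1, min(max_cores, vcpus) + 1):
            for t in range(1, min(max_threads, vcpus) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

# 1 vCPU under the 65536/65536/65536 ceiling -> only (1, 1, 1), matching
# "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
print(list(possible_topologies(1, 65536, 65536, 65536)))
```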
Nov 26 01:57:16 compute-0 nova_compute[350387]: 2025-11-26 01:57:16.871 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 215 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 418 KiB/s wr, 6 op/s
Nov 26 01:57:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:57:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3293147021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:57:17 compute-0 nova_compute[350387]: 2025-11-26 01:57:17.394 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:57:17 compute-0 nova_compute[350387]: 2025-11-26 01:57:17.396 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:57:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518570362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:57:17 compute-0 nova_compute[350387]: 2025-11-26 01:57:17.876 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
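The monitor map probe logged above is a plain subprocess call; re-running it by hand with the same flags and reading the JSON back:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

monmap = json.loads(out)
for mon in monmap.get("mons", []):
    print(mon.get("name"), mon.get("addr"))
```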
Nov 26 01:57:17 compute-0 nova_compute[350387]: 2025-11-26 01:57:17.920 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
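The "does not exist" line above is an existence probe against the vms pool (the pool name shows up in the rbd source paths of the guest XML below). A minimal sketch with the rados/rbd Python bindings; binding details such as kwarg names and context-manager support vary slightly across Ceph releases:

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
ioctx = cluster.open_ioctx("vms")
try:
    with rbd.Image(ioctx, "d32050dc-c041-47df-994e-7d05cf1f489a_disk.config"):
        print("image exists")
except rbd.ImageNotFound:
    print("image does not exist")   # the case logged above
finally:
    ioctx.close()
    cluster.shutdown()
```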
Nov 26 01:57:17 compute-0 nova_compute[350387]: 2025-11-26 01:57:17.942 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 01:57:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840104818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.436 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.439 350391 DEBUG nova.virt.libvirt.vif [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',id=4,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-4lu5o0hu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:57:14Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODMyNzE4MjMzMjU4NjM4ODYzNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 01:57:18 compute-0 nova_compute[350387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODMyNzE4MjMzMjU4NjM4ODYzNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=d32050dc-c041-47df-994e-7d05cf1f489a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.440 350391 DEBUG nova.network.os_vif_util [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.441 350391 DEBUG nova.network.os_vif_util [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.444 350391 DEBUG nova.objects.instance [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'pci_devices' on Instance uuid d32050dc-c041-47df-994e-7d05cf1f489a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.468 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] End _get_guest_xml xml=<domain type="kvm">
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <uuid>d32050dc-c041-47df-994e-7d05cf1f489a</uuid>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <name>instance-00000004</name>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <memory>524288</memory>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <metadata>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:name>vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt</nova:name>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 01:57:16</nova:creationTime>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:flavor name="m1.small">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:memory>512</nova:memory>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:user uuid="b130e7a8bed3424f9f5ff63b35cd2b28">admin</nova:user>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:project uuid="4d902f6105ab4c81a51a4751fa89a83e">admin</nova:project>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="48e08d00-37a3-4465-a949-ff0b8afe4def"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <nova:port uuid="25d715a2-34af-4ad1-bc6d-0303fb8763f1">
Nov 26 01:57:18 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="192.168.0.232" ipVersion="4"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </metadata>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <system>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="serial">d32050dc-c041-47df-994e-7d05cf1f489a</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="uuid">d32050dc-c041-47df-994e-7d05cf1f489a</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </system>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <os>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </os>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <features>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <apic/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </features>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </clock>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </cpu>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  <devices>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/d32050dc-c041-47df-994e-7d05cf1f489a_disk">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/d32050dc-c041-47df-994e-7d05cf1f489a_disk.eph0">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <target dev="vdb" bus="virtio"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/d32050dc-c041-47df-994e-7d05cf1f489a_disk.config">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </source>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 01:57:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:99:2d:81"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <target dev="tap25d715a2-34"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </interface>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/console.log" append="off"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </serial>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <video>
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </video>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </rng>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 01:57:18 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 01:57:18 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 01:57:18 compute-0 nova_compute[350387]:  </devices>
Nov 26 01:57:18 compute-0 nova_compute[350387]: </domain>
Nov 26 01:57:18 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
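The XML between "Start _get_guest_xml" and here is the complete domain definition: RBD-backed vda/vdb plus a config-drive cdrom on sda, one virtio NIC at MTU 1442 targeting the OVS tap, and a q35 machine with a stack of pcie-root-ports. A minimal sketch (not Nova's code path) of feeding such an XML to libvirt by hand:

```python
import libvirt

with open("domain.xml") as f:      # the XML dumped above, saved to a file
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(xml)      # persist the definition (cf. virsh define)
    dom.create()                   # start the guest (cf. virsh start)
    print(dom.name(), "active:", bool(dom.isActive()))
finally:
    conn.close()
```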
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.469 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Preparing to wait for external event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.469 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.469 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.470 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
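The "-events" lock above registers interest in network-vif-plugged before the VIF is actually plugged, so a fast Neutron notification cannot be lost. A generic sketch of that prepare-then-wait pattern (illustrative only, not Nova's InstanceEvents):

```python
import threading

_events = {}
_events_lock = threading.Lock()   # plays the role of the "-events" lock above

def prepare_for_event(tag: str) -> threading.Event:
    with _events_lock:
        return _events.setdefault(tag, threading.Event())

def deliver_event(tag: str) -> None:
    with _events_lock:
        ev = _events.get(tag)
    if ev:
        ev.set()

tag = "network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1"
waiter = prepare_for_event(tag)   # register first...
# ... then plug the VIF and start the guest; when the Neutron callback
# arrives, another thread calls deliver_event(tag).
deliver_event(tag)
print("plugged in time:", waiter.wait(timeout=300))
```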
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.470 350391 DEBUG nova.virt.libvirt.vif [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T01:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',id=4,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-4lu5o0hu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T01:57:14Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODMyNzE4MjMzMjU4NjM4ODYzNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.470 350391 DEBUG nova.network.os_vif_util [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.471 350391 DEBUG nova.network.os_vif_util [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.471 350391 DEBUG os_vif [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.472 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.472 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.472 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.476 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.476 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25d715a2-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.476 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25d715a2-34, col_values=(('external_ids', {'iface-id': '25d715a2-34af-4ad1-bc6d-0303fb8763f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:2d:81', 'vm-uuid': 'd32050dc-c041-47df-994e-7d05cf1f489a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
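The AddBridgeCommand/AddPortCommand/DbSetCommand entries are ovsdbapp commands batched into OVSDB transactions. A sketch of the same sequence through ovsdbapp's Open_vSwitch schema API; the socket path and timeout are assumptions, while `may_exist=True` (which makes the bridge add a no-op, hence "Transaction caused no change") mirrors the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ext_ids = {'iface-id': '25d715a2-34af-4ad1-bc6d-0303fb8763f1',
               'iface-status': 'active',
               'attached-mac': 'fa:16:3e:99:2d:81',
               'vm-uuid': 'd32050dc-c041-47df-994e-7d05cf1f489a'}

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap25d715a2-34', may_exist=True))
        txn.add(api.db_set('Interface', 'tap25d715a2-34',
                           ('external_ids', ext_ids)))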
Nov 26 01:57:18 compute-0 NetworkManager[48886]: <info>  [1764122238.4791] manager: (tap25d715a2-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.482 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.490 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.493 350391 INFO os_vif [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34')#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.566 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.567 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.567 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.568 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No VIF found with MAC fa:16:3e:99:2d:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.568 350391 INFO nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Using config drive#033[00m
Nov 26 01:57:18 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 01:57:18.439 350391 DEBUG nova.virt.libvirt.vif [None req-a81a97d1-d2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
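The rsyslogd complaint above means an oversized nova DEBUG line (8192 bytes; the VIF dump whose beginning is quoted) exceeded rsyslog's configured 8096-byte limit and was truncated on the syslog path. Raising the global maxMessageSize, e.g. global(maxMessageSize="64k") in RainerScript or the legacy $MaxMessageSize 64k, is the usual fix; the exact value is deployment policy.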
Nov 26 01:57:18 compute-0 nova_compute[350387]: 2025-11-26 01:57:18.643 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:57:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 221 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 783 KiB/s wr, 16 op/s
Nov 26 01:57:19 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 01:57:19 compute-0 podman[423627]: 2025-11-26 01:57:19.183754326 +0000 UTC m=+0.114471707 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:57:19 compute-0 podman[423626]: 2025-11-26 01:57:19.188927641 +0000 UTC m=+0.124430036 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Nov 26 01:57:19 compute-0 nova_compute[350387]: 2025-11-26 01:57:19.805 350391 INFO nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Creating config drive at /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config#033[00m
Nov 26 01:57:19 compute-0 nova_compute[350387]: 2025-11-26 01:57:19.814 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnowmc08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:19 compute-0 nova_compute[350387]: 2025-11-26 01:57:19.961 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnnowmc08" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
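The mkisofs run packs the instance metadata into an ISO9660 image with volume label config-2; inside the guest, cloud-init locates the device by that label and reads the standard config-drive v2 layout. A sketch of consuming it, assuming the drive is already mounted at /mnt/config (a hypothetical mount point):

    import json

    # openstack/latest/meta_data.json is the standard config-drive v2 layout;
    # /mnt/config is a hypothetical mount point for the config-2 volume.
    with open('/mnt/config/openstack/latest/meta_data.json') as f:
        meta = json.load(f)

    print(meta['uuid'])  # d32050dc-c041-47df-994e-7d05cf1f489a for this guest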
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.003 350391 DEBUG nova.storage.rbd_utils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image d32050dc-c041-47df-994e-7d05cf1f489a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.016 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config d32050dc-c041-47df-994e-7d05cf1f489a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.045 350391 DEBUG nova.network.neutron [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updated VIF entry in instance network info cache for port 25d715a2-34af-4ad1-bc6d-0303fb8763f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.046 350391 DEBUG nova.network.neutron [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.063 350391 DEBUG oslo_concurrency.lockutils [req-fee133fa-3201-495a-a7df-6cffe5b4aa36 req-0fe4d2e6-4d21-43fa-b2cd-e3483621cb9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.324 350391 DEBUG oslo_concurrency.processutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config d32050dc-c041-47df-994e-7d05cf1f489a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.327 350391 INFO nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Deleting local config drive /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a/disk.config because it was imported into RBD.#033[00m
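Because this deployment is RBD-backed, the freshly built ISO is imported into the vms pool and the local copy deleted; the guest attaches the config drive straight from Ceph. A sketch of verifying the import with the python-rados/python-rbd bindings, reusing the credentials from the logged CLI call:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            images = rbd.RBD().list(ioctx)
            # should now include d32050dc-..._disk.config
            print([i for i in images if i.endswith('_disk.config')])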
Nov 26 01:57:20 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 01:57:20 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 01:57:20 compute-0 NetworkManager[48886]: <info>  [1764122240.5195] manager: (tap25d715a2-34): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 26 01:57:20 compute-0 kernel: tap25d715a2-34: entered promiscuous mode
Nov 26 01:57:20 compute-0 ovn_controller[89102]: 2025-11-26T01:57:20Z|00045|binding|INFO|Claiming lport 25d715a2-34af-4ad1-bc6d-0303fb8763f1 for this chassis.
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.523 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:20 compute-0 ovn_controller[89102]: 2025-11-26T01:57:20Z|00046|binding|INFO|25d715a2-34af-4ad1-bc6d-0303fb8763f1: Claiming fa:16:3e:99:2d:81 192.168.0.232
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.540 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:2d:81 192.168.0.232'], port_security=['fa:16:3e:99:2d:81 192.168.0.232'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-2ev52kuax77s-ynduxzek5ukb-port-7xnbby5gttbg', 'neutron:cidrs': '192.168.0.232/24', 'neutron:device_id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-2ev52kuax77s-ynduxzek5ukb-port-7xnbby5gttbg', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=25d715a2-34af-4ad1-bc6d-0303fb8763f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.542 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 25d715a2-34af-4ad1-bc6d-0303fb8763f1 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e bound to our chassis#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.544 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 01:57:20 compute-0 ovn_controller[89102]: 2025-11-26T01:57:20Z|00047|binding|INFO|Setting lport 25d715a2-34af-4ad1-bc6d-0303fb8763f1 ovn-installed in OVS
Nov 26 01:57:20 compute-0 ovn_controller[89102]: 2025-11-26T01:57:20Z|00048|binding|INFO|Setting lport 25d715a2-34af-4ad1-bc6d-0303fb8763f1 up in Southbound
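The claim sequence above is OVN's normal plug handshake: ovn-controller on this chassis notices the new OVS Interface whose external_ids:iface-id (written by the nova transaction earlier) equals the Port_Binding's logical_port, checks that the binding's requested-chassis names this host (compute-0.ctlplane.example.com, visible in the metadata agent's event above), claims the lport, marks it ovn-installed in OVS, and sets it up in the Southbound DB.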
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.559 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.562 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.570 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6d15ef10-48d6-4b16-8454-ec9e7d647a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:57:20 compute-0 systemd-machined[138512]: New machine qemu-4-instance-00000004.
Nov 26 01:57:20 compute-0 systemd-udevd[423747]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 01:57:20 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.624 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1a3ff2-9ba1-4fb1-98cf-59d7da37e387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.628 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[e8288350-071b-4d2f-8f06-a574c6eef3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:57:20 compute-0 NetworkManager[48886]: <info>  [1764122240.6400] device (tap25d715a2-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 01:57:20 compute-0 NetworkManager[48886]: <info>  [1764122240.6410] device (tap25d715a2-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.667 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[09eada75-4466-406b-be17-43521cb5bbc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.708 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4e98eccd-2c06-4fad-a76f-7401bf56a4de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 10, 'rx_bytes': 532, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 19796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423755, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.739 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[9f73af64-b423-470f-a188-c2253350ef8f]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423758, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423758, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
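The privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-c97f5f89-... namespace: the veth leg tapc97f5f89-71 is up and carries 192.168.0.2/24 plus the metadata address 169.254.169.254/32. A sketch of the same query with pyroute2 directly (requires root and the namespace to exist):

    from pyroute2 import NetNS

    # Namespace name taken from the netlink 'target' field in the log.
    with NetNS('ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e') as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'))
    # expected: tapc97f5f89-71 192.168.0.2 and tapc97f5f89-71 169.254.169.254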
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.743 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.746 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:20 compute-0 nova_compute[350387]: 2025-11-26 01:57:20.748 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.748 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.749 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.750 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 01:57:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:20.751 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 01:57:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.339 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122241.338029, d32050dc-c041-47df-994e-7d05cf1f489a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.339 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] VM Started (Lifecycle Event)#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.360 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.369 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122241.338201, d32050dc-c041-47df-994e-7d05cf1f489a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.370 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] VM Paused (Lifecycle Event)#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.389 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.412 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.438 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.544 350391 DEBUG nova.compute.manager [req-8ca741f1-b5fa-4769-94c7-a0215a6a7ef0 req-159ade6a-f897-4d20-8e52-bf469835443e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.545 350391 DEBUG oslo_concurrency.lockutils [req-8ca741f1-b5fa-4769-94c7-a0215a6a7ef0 req-159ade6a-f897-4d20-8e52-bf469835443e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.546 350391 DEBUG oslo_concurrency.lockutils [req-8ca741f1-b5fa-4769-94c7-a0215a6a7ef0 req-159ade6a-f897-4d20-8e52-bf469835443e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.546 350391 DEBUG oslo_concurrency.lockutils [req-8ca741f1-b5fa-4769-94c7-a0215a6a7ef0 req-159ade6a-f897-4d20-8e52-bf469835443e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.546 350391 DEBUG nova.compute.manager [req-8ca741f1-b5fa-4769-94c7-a0215a6a7ef0 req-159ade6a-f897-4d20-8e52-bf469835443e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Processing event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.547 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.553 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122241.5529933, d32050dc-c041-47df-994e-7d05cf1f489a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.553 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] VM Resumed (Lifecycle Event)#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.556 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.566 350391 INFO nova.virt.libvirt.driver [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Instance spawned successfully.#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.567 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.575 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.592 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
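The Paused-then-Resumed pair during spawn is expected: the guest is launched paused until the network-vif-plugged event arrives (received at 01:57:21.544 above) and is then resumed, with each transition triggering a power-state sync. The numeric states in those sync lines are nova's power-state constants:

    # Values from nova/compute/power_state.py; the sync lines above compare
    # DB power_state 0 (NOSTATE) against VM power_state 3 (PAUSED), then 1.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}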
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.606 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.607 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.607 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.608 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.609 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.609 350391 DEBUG nova.virt.libvirt.driver [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.624 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.677 350391 INFO nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Took 7.55 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.678 350391 DEBUG nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.772 350391 INFO nova.compute.manager [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Took 8.72 seconds to build instance.#033[00m
Nov 26 01:57:21 compute-0 nova_compute[350387]: 2025-11-26 01:57:21.790 350391 DEBUG oslo_concurrency.lockutils [None req-a81a97d1-d20b-4845-89d4-c4e20b2be0ea b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:57:22 compute-0 nova_compute[350387]: 2025-11-26 01:57:22.319 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:22 compute-0 nova_compute[350387]: 2025-11-26 01:57:22.319 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.324 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.325 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.326 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.326 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.326 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.480 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.698 350391 DEBUG nova.compute.manager [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.698 350391 DEBUG oslo_concurrency.lockutils [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.699 350391 DEBUG oslo_concurrency.lockutils [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.700 350391 DEBUG oslo_concurrency.lockutils [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.701 350391 DEBUG nova.compute.manager [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] No waiting events found dispatching network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.702 350391 WARNING nova.compute.manager [req-d72e7634-6b58-4dda-817d-d5129c61e644 req-dcdad33f-3cc5-433d-b4de-711024d80963 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received unexpected event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 for instance with vm_state active and task_state None.#033[00m
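The WARNING above is most likely benign here: Neutron sends network-vif-plugged again once the port's status flips to ACTIVE after the OVN claim, but by then the spawn has finished (vm_state active, task_state None) and nothing is waiting on the event, so the duplicate is logged and dropped.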
Nov 26 01:57:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:57:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616947700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:57:23 compute-0 nova_compute[350387]: 2025-11-26 01:57:23.867 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
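update_available_resource derives the RBD-backed disk capacity from that ceph df call. A sketch of the same probe, using the flags from the logged command and the top-level "stats" block of the JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('free: %.2f GiB of %.2f GiB' % (
        stats['total_avail_bytes'] / 2**30,
        stats['total_bytes'] / 2**30))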
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.022 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.023 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.024 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.029 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.029 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.029 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.032 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.032 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.033 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.037 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.037 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.037 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.528 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.530 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3363MB free_disk=59.87269973754883GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.530 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.530 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.757 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.758 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.758 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.759 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.759 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.759 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:57:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 683 KiB/s rd, 1.4 MiB/s wr, 70 op/s
Nov 26 01:57:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:24.975 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:24.976 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:57:24.977 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:24 compute-0 nova_compute[350387]: 2025-11-26 01:57:24.997 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 01:57:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:57:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043424967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:57:25 compute-0 nova_compute[350387]: 2025-11-26 01:57:25.525 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:57:25 compute-0 nova_compute[350387]: 2025-11-26 01:57:25.536 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:57:25 compute-0 nova_compute[350387]: 2025-11-26 01:57:25.564 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:57:25 compute-0 nova_compute[350387]: 2025-11-26 01:57:25.599 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:57:25 compute-0 nova_compute[350387]: 2025-11-26 01:57:25.599 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.601 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.601 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.716 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.717 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.870 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.871 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:57:26 compute-0 nova_compute[350387]: 2025-11-26 01:57:26.871 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 01:57:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 82 op/s
Nov 26 01:57:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:57:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961436188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:57:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:57:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961436188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:57:28 compute-0 nova_compute[350387]: 2025-11-26 01:57:28.470 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 01:57:28 compute-0 nova_compute[350387]: 2025-11-26 01:57:28.484 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:28 compute-0 nova_compute[350387]: 2025-11-26 01:57:28.486 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:57:28 compute-0 nova_compute[350387]: 2025-11-26 01:57:28.486 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 01:57:28 compute-0 nova_compute[350387]: 2025-11-26 01:57:28.486 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 993 KiB/s wr, 90 op/s
Nov 26 01:57:29 compute-0 podman[158021]: time="2025-11-26T01:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:57:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:57:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8630 "" "Go-http-client/1.1"
Nov 26 01:57:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 628 KiB/s wr, 81 op/s
Nov 26 01:57:31 compute-0 nova_compute[350387]: 2025-11-26 01:57:31.410 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:31 compute-0 openstack_network_exporter[367323]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:57:31 compute-0 openstack_network_exporter[367323]: ERROR   01:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:57:31 compute-0 openstack_network_exporter[367323]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:57:31 compute-0 openstack_network_exporter[367323]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:57:31 compute-0 openstack_network_exporter[367323]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.835 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.867 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid b1c088bc-7a6b-4580-93ff-685731747189 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.867 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid 0e500d52-72e1-4501-b4d6-fc6ca575760f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.868 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.868 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid d32050dc-c041-47df-994e-7d05cf1f489a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.868 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.869 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "b1c088bc-7a6b-4580-93ff-685731747189" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.869 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.870 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.870 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.870 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.871 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:57:32 compute-0 nova_compute[350387]: 2025-11-26 01:57:32.872 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:57:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 59 op/s
Nov 26 01:57:33 compute-0 nova_compute[350387]: 2025-11-26 01:57:33.047 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "b1c088bc-7a6b-4580-93ff-685731747189" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:33 compute-0 nova_compute[350387]: 2025-11-26 01:57:33.051 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:33 compute-0 nova_compute[350387]: 2025-11-26 01:57:33.070 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:33 compute-0 nova_compute[350387]: 2025-11-26 01:57:33.073 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:57:33 compute-0 nova_compute[350387]: 2025-11-26 01:57:33.490 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:34 compute-0 nova_compute[350387]: 2025-11-26 01:57:34.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:34 compute-0 nova_compute[350387]: 2025-11-26 01:57:34.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 01:57:34 compute-0 nova_compute[350387]: 2025-11-26 01:57:34.317 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 01:57:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 85 B/s wr, 75 op/s
Nov 26 01:57:36 compute-0 nova_compute[350387]: 2025-11-26 01:57:36.413 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:36 compute-0 podman[423866]: 2025-11-26 01:57:36.580045104 +0000 UTC m=+0.110983578 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:57:36 compute-0 podman[423868]: 2025-11-26 01:57:36.590094007 +0000 UTC m=+0.106917273 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:57:36 compute-0 podman[423867]: 2025-11-26 01:57:36.590646322 +0000 UTC m=+0.117077309 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 01:57:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 887 KiB/s rd, 0 B/s wr, 68 op/s
Nov 26 01:57:37 compute-0 nova_compute[350387]: 2025-11-26 01:57:37.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:57:37 compute-0 nova_compute[350387]: 2025-11-26 01:57:37.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 01:57:38 compute-0 nova_compute[350387]: 2025-11-26 01:57:38.499 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 0 B/s wr, 72 op/s
Nov 26 01:57:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:40 compute-0 podman[423928]: 2025-11-26 01:57:40.602542609 +0000 UTC m=+0.154404421 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 01:57:40 compute-0 podman[423929]: 2025-11-26 01:57:40.621971427 +0000 UTC m=+0.165823173 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:57:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 0 B/s wr, 60 op/s
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:57:41
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'backups']
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:57:41 compute-0 nova_compute[350387]: 2025-11-26 01:57:41.414 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:57:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:57:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:57:43 compute-0 nova_compute[350387]: 2025-11-26 01:57:43.508 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:43 compute-0 podman[423975]: 2025-11-26 01:57:43.631194271 +0000 UTC m=+0.174865857 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Nov 26 01:57:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 01:57:46 compute-0 nova_compute[350387]: 2025-11-26 01:57:46.418 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 01:57:47 compute-0 podman[423994]: 2025-11-26 01:57:47.584502005 +0000 UTC m=+0.135476508 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 01:57:48 compute-0 nova_compute[350387]: 2025-11-26 01:57:48.514 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 26 01:57:49 compute-0 podman[424014]: 2025-11-26 01:57:49.555752825 +0000 UTC m=+0.107091748 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:57:49 compute-0 podman[424013]: 2025-11-26 01:57:49.595167136 +0000 UTC m=+0.144454111 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 01:57:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:50 compute-0 ovn_controller[89102]: 2025-11-26T01:57:50Z|00049|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 26 01:57:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019267348562544769 of space, bias 1.0, pg target 0.5780204568763431 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:57:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 01:57:51 compute-0 nova_compute[350387]: 2025-11-26 01:57:51.422 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:57:53 compute-0 nova_compute[350387]: 2025-11-26 01:57:53.518 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:57:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 01:57:56 compute-0 nova_compute[350387]: 2025-11-26 01:57:56.424 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 234 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 1 op/s
Nov 26 01:57:57 compute-0 ovn_controller[89102]: 2025-11-26T01:57:57Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:2d:81 192.168.0.232
Nov 26 01:57:57 compute-0 ovn_controller[89102]: 2025-11-26T01:57:57Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:2d:81 192.168.0.232
Nov 26 01:57:58 compute-0 nova_compute[350387]: 2025-11-26 01:57:58.522 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:57:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 240 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 520 KiB/s wr, 16 op/s
Nov 26 01:57:59 compute-0 podman[158021]: time="2025-11-26T01:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:57:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:57:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
Nov 26 01:57:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 261 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Nov 26 01:58:01 compute-0 openstack_network_exporter[367323]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:58:01 compute-0 openstack_network_exporter[367323]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:58:01 compute-0 openstack_network_exporter[367323]: ERROR   01:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:58:01 compute-0 openstack_network_exporter[367323]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:58:01 compute-0 openstack_network_exporter[367323]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:58:01 compute-0 nova_compute[350387]: 2025-11-26 01:58:01.427 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:58:03 compute-0 nova_compute[350387]: 2025-11-26 01:58:03.527 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:58:06 compute-0 nova_compute[350387]: 2025-11-26 01:58:06.431 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Nov 26 01:58:07 compute-0 podman[424062]: 2025-11-26 01:58:07.582775945 +0000 UTC m=+0.106958475 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 01:58:07 compute-0 podman[424061]: 2025-11-26 01:58:07.588340672 +0000 UTC m=+0.123870751 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:58:07 compute-0 podman[424060]: 2025-11-26 01:58:07.590947155 +0000 UTC m=+0.130472877 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 26 01:58:08 compute-0 nova_compute[350387]: 2025-11-26 01:58:08.532 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Nov 26 01:58:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:58:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:58:09 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:10 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:10 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:10 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8899caf9-841f-440d-b13d-84e8a395bcb6 does not exist
Nov 26 01:58:10 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 689049e1-1e71-4e0a-b6e7-c63bfe4d5c9f does not exist
Nov 26 01:58:10 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 198c8347-2f46-49e1-92fd-edebc3dab620 does not exist
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:58:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:58:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:58:10 compute-0 podman[424389]: 2025-11-26 01:58:10.951467247 +0000 UTC m=+0.144461481 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 01:58:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 1000 KiB/s wr, 41 op/s
Nov 26 01:58:11 compute-0 podman[424390]: 2025-11-26 01:58:11.017190179 +0000 UTC m=+0.201114768 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:58:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:11 compute-0 nova_compute[350387]: 2025-11-26 01:58:11.432 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:58:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:11 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.676101234 +0000 UTC m=+0.087858557 container create 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.633464032 +0000 UTC m=+0.045221395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:11 compute-0 systemd[1]: Started libpod-conmon-5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7.scope.
Nov 26 01:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.838265473 +0000 UTC m=+0.250022826 container init 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.856500647 +0000 UTC m=+0.268257970 container start 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.863064811 +0000 UTC m=+0.274822184 container attach 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 01:58:11 compute-0 happy_albattani[424563]: 167 167
Nov 26 01:58:11 compute-0 systemd[1]: libpod-5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7.scope: Deactivated successfully.
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.870997675 +0000 UTC m=+0.282754958 container died 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e34d21f52b3ac74b0a4720ebbeae9553d96b73d2be584ea72783db4a571ae34-merged.mount: Deactivated successfully.
Nov 26 01:58:11 compute-0 podman[424548]: 2025-11-26 01:58:11.934241117 +0000 UTC m=+0.345998400 container remove 5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:58:11 compute-0 systemd[1]: libpod-conmon-5ba01f6f4448153ec6a9b86e079bb30cf2feb28d49c8ea563b677eea423273a7.scope: Deactivated successfully.
Nov 26 01:58:12 compute-0 podman[424588]: 2025-11-26 01:58:12.203811521 +0000 UTC m=+0.082293308 container create 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:58:12 compute-0 podman[424588]: 2025-11-26 01:58:12.170438021 +0000 UTC m=+0.048919858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:12 compute-0 systemd[1]: Started libpod-conmon-9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff.scope.
Nov 26 01:58:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:12 compute-0 podman[424588]: 2025-11-26 01:58:12.389687278 +0000 UTC m=+0.268169055 container init 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 01:58:12 compute-0 podman[424588]: 2025-11-26 01:58:12.411894244 +0000 UTC m=+0.290376001 container start 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:58:12 compute-0 podman[424588]: 2025-11-26 01:58:12.417377488 +0000 UTC m=+0.295859255 container attach 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:58:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 14 KiB/s wr, 8 op/s
Nov 26 01:58:13 compute-0 nova_compute[350387]: 2025-11-26 01:58:13.537 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:13 compute-0 kind_lamarr[424604]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:58:13 compute-0 kind_lamarr[424604]: --> relative data size: 1.0
Nov 26 01:58:13 compute-0 kind_lamarr[424604]: --> All data devices are unavailable
Nov 26 01:58:13 compute-0 systemd[1]: libpod-9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff.scope: Deactivated successfully.
Nov 26 01:58:13 compute-0 podman[424588]: 2025-11-26 01:58:13.63653469 +0000 UTC m=+1.515016477 container died 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:58:13 compute-0 systemd[1]: libpod-9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff.scope: Consumed 1.120s CPU time.
Nov 26 01:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c71eb5f444650066168a141cacae51c032a1f3821a46fa7b01675a800c949aa-merged.mount: Deactivated successfully.
Nov 26 01:58:13 compute-0 podman[424588]: 2025-11-26 01:58:13.748931986 +0000 UTC m=+1.627413753 container remove 9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:58:13 compute-0 systemd[1]: libpod-conmon-9c70f284d9e7e9f0c6f7e43f1f757df00edf89a277d798bd6ad543a97c4cd5ff.scope: Deactivated successfully.
Nov 26 01:58:13 compute-0 podman[424641]: 2025-11-26 01:58:13.848413379 +0000 UTC m=+0.138538484 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, distribution-scope=public, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30)
Nov 26 01:58:14 compute-0 podman[424801]: 2025-11-26 01:58:14.849475325 +0000 UTC m=+0.070456956 container create 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:58:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:14 compute-0 podman[424801]: 2025-11-26 01:58:14.821770265 +0000 UTC m=+0.042751926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:14 compute-0 systemd[1]: Started libpod-conmon-81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b.scope.
Nov 26 01:58:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Nov 26 01:58:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:15 compute-0 podman[424801]: 2025-11-26 01:58:15.01362552 +0000 UTC m=+0.234607221 container init 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 01:58:15 compute-0 podman[424801]: 2025-11-26 01:58:15.030987309 +0000 UTC m=+0.251968940 container start 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:58:15 compute-0 sad_gates[424817]: 167 167
Nov 26 01:58:15 compute-0 podman[424801]: 2025-11-26 01:58:15.043742469 +0000 UTC m=+0.264724100 container attach 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 01:58:15 compute-0 systemd[1]: libpod-81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b.scope: Deactivated successfully.
Nov 26 01:58:15 compute-0 podman[424801]: 2025-11-26 01:58:15.044226172 +0000 UTC m=+0.265207803 container died 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-444ca8950f0698353849cb12d338917e800985d5c3cbeaf9219d8a289570b4aa-merged.mount: Deactivated successfully.
Nov 26 01:58:15 compute-0 podman[424801]: 2025-11-26 01:58:15.104977264 +0000 UTC m=+0.325958895 container remove 81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gates, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:58:15 compute-0 systemd[1]: libpod-conmon-81dd4ba21b52d373bacbc0d435acba87583e063ad7fe266e27afc1de5b720e0b.scope: Deactivated successfully.
Nov 26 01:58:15 compute-0 podman[424840]: 2025-11-26 01:58:15.369064155 +0000 UTC m=+0.094234816 container create 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 01:58:15 compute-0 podman[424840]: 2025-11-26 01:58:15.33446936 +0000 UTC m=+0.059640081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:15 compute-0 systemd[1]: Started libpod-conmon-562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3.scope.
Nov 26 01:58:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e42318c58b7eed73fb5b25694dd8a2ba2247aa671d0204122ee62bef2c6a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e42318c58b7eed73fb5b25694dd8a2ba2247aa671d0204122ee62bef2c6a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e42318c58b7eed73fb5b25694dd8a2ba2247aa671d0204122ee62bef2c6a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e42318c58b7eed73fb5b25694dd8a2ba2247aa671d0204122ee62bef2c6a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:15 compute-0 podman[424840]: 2025-11-26 01:58:15.554198582 +0000 UTC m=+0.279369293 container init 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 01:58:15 compute-0 podman[424840]: 2025-11-26 01:58:15.580970306 +0000 UTC m=+0.306140967 container start 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:58:15 compute-0 podman[424840]: 2025-11-26 01:58:15.587161401 +0000 UTC m=+0.312332122 container attach 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]: {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    "0": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "devices": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "/dev/loop3"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            ],
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_name": "ceph_lv0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_size": "21470642176",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "name": "ceph_lv0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "tags": {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_name": "ceph",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.crush_device_class": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.encrypted": "0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_id": "0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.vdo": "0"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            },
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "vg_name": "ceph_vg0"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        }
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    ],
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    "1": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "devices": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "/dev/loop4"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            ],
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_name": "ceph_lv1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_size": "21470642176",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "name": "ceph_lv1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "tags": {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_name": "ceph",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.crush_device_class": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.encrypted": "0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_id": "1",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.vdo": "0"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            },
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "vg_name": "ceph_vg1"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        }
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    ],
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    "2": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "devices": [
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "/dev/loop5"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            ],
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_name": "ceph_lv2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_size": "21470642176",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "name": "ceph_lv2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "tags": {
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.cluster_name": "ceph",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.crush_device_class": "",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.encrypted": "0",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osd_id": "2",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:                "ceph.vdo": "0"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            },
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "type": "block",
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:            "vg_name": "ceph_vg2"
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:        }
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]:    ]
Nov 26 01:58:16 compute-0 suspicious_shaw[424856]: }
Nov 26 01:58:16 compute-0 systemd[1]: libpod-562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3.scope: Deactivated successfully.
Nov 26 01:58:16 compute-0 nova_compute[350387]: 2025-11-26 01:58:16.434 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:16 compute-0 podman[424865]: 2025-11-26 01:58:16.520409134 +0000 UTC m=+0.064443456 container died 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 01:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-16e42318c58b7eed73fb5b25694dd8a2ba2247aa671d0204122ee62bef2c6a0e-merged.mount: Deactivated successfully.
Nov 26 01:58:16 compute-0 podman[424865]: 2025-11-26 01:58:16.625631839 +0000 UTC m=+0.169666081 container remove 562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:58:16 compute-0 systemd[1]: libpod-conmon-562b1bd44c91e096357c7a894674b199936fb8e1bcd7d908e9f9ef2d392f0ec3.scope: Deactivated successfully.
Nov 26 01:58:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.785629312 +0000 UTC m=+0.091054846 container create ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.748576298 +0000 UTC m=+0.054001882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:17 compute-0 systemd[1]: Started libpod-conmon-ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7.scope.
Nov 26 01:58:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.946121904 +0000 UTC m=+0.251547448 container init ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.963554085 +0000 UTC m=+0.268979619 container start ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.970296525 +0000 UTC m=+0.275722069 container attach ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:58:17 compute-0 pensive_boyd[425041]: 167 167
Nov 26 01:58:17 compute-0 systemd[1]: libpod-ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7.scope: Deactivated successfully.
Nov 26 01:58:17 compute-0 podman[425019]: 2025-11-26 01:58:17.976794368 +0000 UTC m=+0.282219852 container died ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:58:17 compute-0 podman[425033]: 2025-11-26 01:58:17.997489001 +0000 UTC m=+0.127111062 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 26 01:58:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2b4dd3e5aa8fc80401de1c1046e15222c2d0ec4de7c68ad324e1ca526f05c0-merged.mount: Deactivated successfully.
Nov 26 01:58:18 compute-0 podman[425019]: 2025-11-26 01:58:18.032901649 +0000 UTC m=+0.338327153 container remove ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 01:58:18 compute-0 systemd[1]: libpod-conmon-ca8a559577e33880757f9f26f48dd3022ca295dcde22ed76c8d3d6bedc3ab4d7.scope: Deactivated successfully.
Nov 26 01:58:18 compute-0 podman[425078]: 2025-11-26 01:58:18.284508978 +0000 UTC m=+0.080814798 container create 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 01:58:18 compute-0 podman[425078]: 2025-11-26 01:58:18.24732034 +0000 UTC m=+0.043626250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:58:18 compute-0 systemd[1]: Started libpod-conmon-2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83.scope.
Nov 26 01:58:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789287163efb1cd8c319484ee444c1f357820f018962e98d61240f390516662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789287163efb1cd8c319484ee444c1f357820f018962e98d61240f390516662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789287163efb1cd8c319484ee444c1f357820f018962e98d61240f390516662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a789287163efb1cd8c319484ee444c1f357820f018962e98d61240f390516662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:58:18 compute-0 podman[425078]: 2025-11-26 01:58:18.451484533 +0000 UTC m=+0.247790393 container init 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 01:58:18 compute-0 podman[425078]: 2025-11-26 01:58:18.470118238 +0000 UTC m=+0.266424098 container start 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:58:18 compute-0 podman[425078]: 2025-11-26 01:58:18.476538139 +0000 UTC m=+0.272843999 container attach 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 01:58:18 compute-0 nova_compute[350387]: 2025-11-26 01:58:18.543 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:19 compute-0 interesting_robinson[425092]: {
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_id": 0,
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "type": "bluestore"
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    },
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_id": 2,
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "type": "bluestore"
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    },
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_id": 1,
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:        "type": "bluestore"
Nov 26 01:58:19 compute-0 interesting_robinson[425092]:    }
Nov 26 01:58:19 compute-0 interesting_robinson[425092]: }
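This second JSON report maps each OSD uuid to its cluster fsid, device-mapper path, numeric id, and store type; all three OSDs here are bluestore on the same cluster fsid. If the same data is needed keyed by OSD id rather than uuid, a one-line inversion suffices; a sketch under the assumption that the listing has exactly the shape printed above:

    # Sketch: index a ceph-volume style listing by osd_id instead of osd_uuid.
    def by_osd_id(listing):
        return {v["osd_id"]: (v["device"], v["osd_uuid"])
                for v in listing.values()}

    # by_osd_id(listing)[2]
    # -> ('/dev/mapper/ceph_vg2-ceph_lv2', '8f697525-afad-4f38-820d-80587338cf3b')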
Nov 26 01:58:19 compute-0 systemd[1]: libpod-2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83.scope: Deactivated successfully.
Nov 26 01:58:19 compute-0 podman[425078]: 2025-11-26 01:58:19.535908576 +0000 UTC m=+1.332214436 container died 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:58:19 compute-0 systemd[1]: libpod-2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83.scope: Consumed 1.070s CPU time.
Nov 26 01:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a789287163efb1cd8c319484ee444c1f357820f018962e98d61240f390516662-merged.mount: Deactivated successfully.
Nov 26 01:58:19 compute-0 podman[425078]: 2025-11-26 01:58:19.633589458 +0000 UTC m=+1.429895278 container remove 2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 01:58:19 compute-0 systemd[1]: libpod-conmon-2914d88218c9fe8b4018403fc4c3de2e94666cbb91edd9c42d6a1f0549019b83.scope: Deactivated successfully.
Nov 26 01:58:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:58:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:58:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 20f9b15d-4514-4d4e-8444-3848ac89877b does not exist
Nov 26 01:58:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fccc7b66-ae57-4c24-a5d8-b2d2ee097a64 does not exist
Nov 26 01:58:19 compute-0 podman[425139]: 2025-11-26 01:58:19.734494831 +0000 UTC m=+0.121992228 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 01:58:19 compute-0 podman[425153]: 2025-11-26 01:58:19.818640962 +0000 UTC m=+0.126151696 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 01:58:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:58:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:21 compute-0 nova_compute[350387]: 2025-11-26 01:58:21.320 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:21 compute-0 nova_compute[350387]: 2025-11-26 01:58:21.438 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:22 compute-0 nova_compute[350387]: 2025-11-26 01:58:22.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.468 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.470 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.471 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.471 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.472 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:58:23 compute-0 nova_compute[350387]: 2025-11-26 01:58:23.548 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:58:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633859901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.023 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
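The resource audit shells out to the exact command logged above to size the RBD-backed disk pool. A sketch of what that call yields, assuming the standard ceph df JSON schema (a top-level "stats" object with total_bytes and total_avail_bytes):

    # Sketch: derive pool capacity the way the audit does, from `ceph df`.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("free GiB:", stats["total_avail_bytes"] / 1024**3,
          "of", stats["total_bytes"] / 1024**3)
    # Consistent with the free_disk=59.85...GB figure reported a few lines below.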
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.151 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.153 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.154 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.166 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.166 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.167 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.177 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.178 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.178 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.187 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.187 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.188 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:58:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.913 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.915 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3198MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
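The pci_devices dump above lists eleven devices, all with numa_node null on this single-NUMA guest. A quick sanity check of the vendor split (list shape exactly as logged; vendor 1af4 is virtio, 8086 is the emulated Intel chipset):

    # Sketch: tally the vendor_id field of the pci_devices list above.
    from collections import Counter

    def vendors(pci_devices):
        return Counter(d["vendor_id"] for d in pci_devices)

    # vendors(pci_devices) -> Counter({'1af4': 6, '8086': 5})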
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.916 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:58:24 compute-0 nova_compute[350387]: 2025-11-26 01:58:24.917 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:58:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:58:24.977 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:58:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:58:24.978 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:58:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:58:24.979 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:58:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.035 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.035 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.036 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.037 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.038 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.038 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.204 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:58:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:58:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752126670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.701 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.714 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.733 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
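The inventory record above is what placement uses to bound scheduling: for each resource class, effective capacity is (total - reserved) * allocation_ratio. Worked through with the numbers as logged:

    # Effective placement capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": (8, 0, 4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB": (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2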
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.735 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 01:58:25 compute-0 nova_compute[350387]: 2025-11-26 01:58:25.735 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:58:26 compute-0 nova_compute[350387]: 2025-11-26 01:58:26.443 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:26 compute-0 nova_compute[350387]: 2025-11-26 01:58:26.735 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:26 compute-0 nova_compute[350387]: 2025-11-26 01:58:26.736 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 01:58:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:58:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171116067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:58:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:58:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171116067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:58:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:27 compute-0 nova_compute[350387]: 2025-11-26 01:58:27.740 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 01:58:27 compute-0 nova_compute[350387]: 2025-11-26 01:58:27.742 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 01:58:27 compute-0 nova_compute[350387]: 2025-11-26 01:58:27.743 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 01:58:28 compute-0 nova_compute[350387]: 2025-11-26 01:58:28.554 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:29 compute-0 podman[158021]: time="2025-11-26T01:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.753 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
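The cache refresh writes back one VIF record like the one above; the nested subnets/ips structure carries both the fixed address and any floating IPs mapped onto it. A sketch of walking that structure, assuming exactly the shape logged:

    # Sketch: extract fixed and floating addresses from a cached VIF entry.
    def addresses(vif):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], [f["address"]
                                      for f in ip.get("floating_ips", [])]

    # list(addresses(vif)) -> [('192.168.0.36', ['192.168.122.202'])]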
Nov 26 01:58:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.780 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.781 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 01:58:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.782 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.782 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.783 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:29 compute-0 nova_compute[350387]: 2025-11-26 01:58:29.783 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 01:58:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:30 compute-0 nova_compute[350387]: 2025-11-26 01:58:30.340 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:58:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.4 KiB/s wr, 0 op/s
Nov 26 01:58:31 compute-0 openstack_network_exporter[367323]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:58:31 compute-0 openstack_network_exporter[367323]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:58:31 compute-0 openstack_network_exporter[367323]: ERROR   01:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:58:31 compute-0 openstack_network_exporter[367323]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:58:31 compute-0 openstack_network_exporter[367323]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:58:31 compute-0 nova_compute[350387]: 2025-11-26 01:58:31.445 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Nov 26 01:58:33 compute-0 nova_compute[350387]: 2025-11-26 01:58:33.557 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Nov 26 01:58:36 compute-0 nova_compute[350387]: 2025-11-26 01:58:36.448 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Nov 26 01:58:38 compute-0 nova_compute[350387]: 2025-11-26 01:58:38.561 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:58:38 compute-0 podman[425279]: 2025-11-26 01:58:38.611079492 +0000 UTC m=+0.145187032 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 01:58:38 compute-0 podman[425280]: 2025-11-26 01:58:38.616056522 +0000 UTC m=+0.147878567 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:58:38 compute-0 podman[425278]: 2025-11-26 01:58:38.623520042 +0000 UTC m=+0.165061711 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
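The three podman health_status records above carry each container's full definition in a config_data field serialized as a Python dict literal (single quotes, bare True/False), not JSON. A minimal parsing sketch, assuming the field has already been extracted as a string; the excerpt below is a hypothetical abbreviation of the podman_exporter entry:

    import ast

    # Hypothetical abbreviation of the config_data field logged above.
    config_data = ("{'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', "
                   "'net': 'host', 'privileged': True, 'ports': ['9882:9882']}")

    # The field is a Python literal, not JSON, so json.loads would choke on the
    # single quotes and bare True; ast.literal_eval parses it safely.
    cfg = ast.literal_eval(config_data)
    print(cfg['image'], cfg['ports'])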
Nov 26 01:58:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Nov 26 01:58:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
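ceph-mgr emits one of these pgmap digests roughly every two seconds, and the fields are stable enough to scrape. A regex sketch fitted to the lines above (the pattern is an assumption drawn from these samples, not an official parser):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, "
            "357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s")
    m = PGMAP_RE.search(line)
    if m:
        # -> v1437, 321 PGs, 357 MiB used of 60 GiB total
        print(m.group('ver'), m.group('pgs'), m.group('used'), m.group('total'))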
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:58:41
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr']
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections...
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
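The balancer block above is one automatic upmap round: a plan is named, the misplaced-object ceiling (0.05) is checked, every pool is scanned, and 0 of at most 10 changes end up prepared because all 321 PGs are already active+clean. An illustrative control-flow sketch of that decision, not Ceph's actual implementation:

    MAX_MISPLACED = 0.05   # from "max misplaced 0.050000"
    MAX_CHANGES = 10       # from "prepared 0/10 changes"

    def plan_upmap(pools, misplaced_ratio, propose_change):
        """Return proposed upmap changes, or [] if the cluster is balanced."""
        if misplaced_ratio >= MAX_MISPLACED:
            return []  # too much data already moving; skip this round
        changes = []
        for pool in pools:
            change = propose_change(pool)  # returns None when the pool is even
            if change:
                changes.append(change)
            if len(changes) >= MAX_CHANGES:
                break
        return changes

    # With every PG active+clean, propose_change finds nothing and the plan
    # prepares 0/10 changes, matching the log.
    print(plan_upmap(['vms', 'images'], 0.0, lambda pool: None))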
Nov 26 01:58:41 compute-0 nova_compute[350387]: 2025-11-26 01:58:41.451 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:41 compute-0 podman[425337]: 2025-11-26 01:58:41.56048115 +0000 UTC m=+0.109486326 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:58:41 compute-0 podman[425338]: 2025-11-26 01:58:41.621460738 +0000 UTC m=+0.165844144 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:58:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
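The duplicated load_schedules lines above are expected: the rbd_support module runs two schedule handlers (TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler), and each reloads schedules for the same four RBD pools. A quick tally sketch over lines shaped like these (the list is a truncated sample, not the full set):

    from collections import Counter

    lines = [
        "load_schedules: vms, start_after=",
        "load_schedules: vms, start_after=",
        "load_schedules: volumes, start_after=",
        "load_schedules: volumes, start_after=",
    ]
    # Each pool should count once per handler (trash purge + mirror snapshot).
    pools = Counter(l.split()[1].rstrip(',') for l in lines)
    print(pools)  # Counter({'vms': 2, 'volumes': 2})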
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.868 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.868 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.870 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
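The long run of "Registering pollster" lines above follows the warning at 01:58:42.868 that there are more pollsters than worker threads ("[1] threads"), so every pollster is queued onto one shared executor and runs serially. A minimal sketch of that pattern (pollster names abbreviated from the meters polled below):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for a pollster's sample collection.
        return f"polled {name}"

    pollsters = ["disk.ephemeral.size", "network.incoming.packets",
                 "cpu", "memory.usage"]  # far fewer than the real list

    # max_workers=1 mirrors the "[1] threads" in the log: tasks queue up and
    # the full polling cycle takes longer than any single task's runtime.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, p) for p in pollsters]
        for f in futures:
            print(f.result())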
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.882 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'name': 'vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.890 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d32050dc-c041-47df-994e-7d05cf1f489a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 01:58:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:42.892 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d32050dc-c041-47df-994e-7d05cf1f489a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 01:58:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 01:58:43 compute-0 nova_compute[350387]: 2025-11-26 01:58:43.565 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.748 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 26 Nov 2025 01:58:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-62e41d25-9a0d-4bdf-a026-9ffe8ea79232 x-openstack-request-id: req-62e41d25-9a0d-4bdf-a026-9ffe8ea79232 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.749 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d32050dc-c041-47df-994e-7d05cf1f489a", "name": "vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt", "status": "ACTIVE", "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "user_id": "b130e7a8bed3424f9f5ff63b35cd2b28", "metadata": {"metering.server_group": "366b90b6-2e85-40c4-9ca1-855cf9022409"}, "hostId": "2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1", "image": {"id": "48e08d00-37a3-4465-a949-ff0b8afe4def", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/48e08d00-37a3-4465-a949-ff0b8afe4def"}]}, "flavor": {"id": "030e95e2-5458-42ef-a5df-79a19c0b681d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/030e95e2-5458-42ef-a5df-79a19c0b681d"}]}, "created": "2025-11-26T01:57:10Z", "updated": "2025-11-26T01:57:21Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.232", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:99:2d:81"}, {"version": 4, "addr": "192.168.122.234", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:99:2d:81"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d32050dc-c041-47df-994e-7d05cf1f489a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d32050dc-c041-47df-994e-7d05cf1f489a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T01:57:21.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.749 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d32050dc-c041-47df-994e-7d05cf1f489a used request id req-62e41d25-9a0d-4bdf-a026-9ffe8ea79232 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.751 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'name': 'vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.758 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.764 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'name': 'vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
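For instance d32050dc-c041-47df-994e-7d05cf1f489a the discovery cache was cold, so ceilometer fetched the metadata from the Nova API (the REQ/RESP pair above, roughly 0.86 s round trip). The logged curl translates to approximately the following sketch; the token is a placeholder, everything else is taken from the log:

    import requests

    NOVA = "https://nova-internal.openstack.svc:8774/v2.1"
    SERVER_ID = "d32050dc-c041-47df-994e-7d05cf1f489a"
    TOKEN = "<keystone-token>"  # redacted to a SHA256 digest in the log above

    resp = requests.get(
        f"{NOVA}/servers/{SERVER_ID}",
        headers={
            "Accept": "application/json",
            "X-Auth-Token": TOKEN,
            "X-OpenStack-Nova-API-Version": "2.1",  # microversion pinned, as logged
        },
        timeout=10,
    )
    resp.raise_for_status()
    server = resp.json()["server"]
    print(server["name"], server["status"], server["OS-EXT-STS:vm_state"])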
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.765 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.765 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.766 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.766 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T01:58:43.766648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.769 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
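Note the two thread ids above: the heartbeat is announced by the polling thread (15) and stamped by a separate worker (12), which is why an "Updated heartbeat" line can trail the poll it belongs to. A queue-based sketch of that handoff, illustrative rather than ceilometer's actual code:

    import queue
    import threading
    from datetime import datetime, timezone

    beats = queue.Queue()
    status = {}

    def recorder():
        # Separate worker: stamps and stores each announced heartbeat.
        while (name := beats.get()) is not None:
            status[name] = datetime.now(timezone.utc).isoformat()
            print(f"Updated heartbeat for {name} ({status[name]})")

    t = threading.Thread(target=recorder)
    t.start()
    beats.put("disk.ephemeral.size")  # polling thread announces a heartbeat
    beats.put(None)                   # shut the recorder down
    t.join()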
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.770 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.770 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.770 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.770 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.771 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T01:58:43.771114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.778 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.785 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for d32050dc-c041-47df-994e-7d05cf1f489a / tap25d715a2-34 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.785 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.792 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.798 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.799 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
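The "No delta meter predecessor" message for tap25d715a2-34 at 01:58:43.785 is benign: delta meters subtract the previous cumulative reading, and the very first reading for an interface has nothing to subtract from, so no sample is produced. A sketch of that behavior (values below are hypothetical apart from the 12-packet reading taken from the log):

    _previous = {}

    def delta(key, cumulative):
        """Return the change since the last reading, or None on first sight."""
        prev = _previous.get(key)
        _previous[key] = cumulative
        if prev is None:
            return None  # first observation: log "no predecessor" and skip
        return cumulative - prev

    print(delta(("d32050dc", "tap25d715a2-34"), 12))  # None (no predecessor)
    print(delta(("d32050dc", "tap25d715a2-34"), 30))  # 18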
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.799 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.800 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.800 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.800 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.800 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.802 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.802 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.803 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.803 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.803 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.804 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.804 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.804 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.805 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.805 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.806 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.807 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.807 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.808 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.808 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.808 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.808 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.809 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T01:58:43.800599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.809 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.810 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.811 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.811 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T01:58:43.804048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T01:58:43.808485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.812 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.813 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.813 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.814 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.814 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes volume: 7148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T01:58:43.812722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.816 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.817 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.817 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.817 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.817 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T01:58:43.817451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.859 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/cpu volume: 36360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.919 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/cpu volume: 34910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:43.969 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 42540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.022 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/cpu volume: 308000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.022 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
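The cpu meter above is cumulative guest CPU time in nanoseconds (36360000000 ns is about 36.4 s of CPU since instance a8b199f7 booted). Utilization over a polling interval comes from the delta between two samples; a worked example in which the second sample and the interval are hypothetical, with only the first value taken from the log:

    NS_PER_S = 1e9
    vcpus = 1                                        # flavor m1.small, per the discovery above
    t0_ns, t1_ns = 36_360_000_000, 36_660_000_000    # two successive cpu samples
    interval_s = 10.0                                # hypothetical polling interval

    util_pct = (t1_ns - t0_ns) / (interval_s * vcpus * NS_PER_S) * 100
    print(f"{util_pct:.1f}% CPU")  # 3.0% average utilization over the interval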
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.023 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.023 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.023 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.024 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.024 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.024 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T01:58:44.023992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.025 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.026 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes.delta volume: 2408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.027 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.027 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.027 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.027 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.028 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.028 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.028 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T01:58:44.028270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.029 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/memory.usage volume: 49.0546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.029 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.030 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.031 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
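The block of lines above traces one complete pollster cycle: discovery of local instances, a coordination check against the (empty) hash rings, a heartbeat update, one sample per instance, and the closing "Finished polling" marker. A minimal Python sketch of that cycle, using hypothetical stub names (StubPollster, run_pollster) rather than ceilometer's real classes:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class StubPollster:
        # hypothetical stand-in for a ceilometer compute pollster
        name: str
        coordination_group: str | None = None  # None => no hash-ring coordination

        def inspect(self, instance_id):
            # placeholder for the libvirt inspector call; returns one stat value
            return 49.0

    def run_pollster(pollster, discovered_instances, heartbeats):
        # "Checking if we need coordination ...": nothing to do when no group is set
        if pollster.coordination_group is not None:
            raise NotImplementedError("hash-ring partitioning not sketched here")
        # "Pollster heartbeat update: <name>"
        heartbeats[pollster.name] = datetime.now(timezone.utc).isoformat()
        samples = []
        for instance_id in discovered_instances:  # result of the discovery step
            volume = pollster.inspect(instance_id)
            # "<uuid>/<meter> volume: <value>"
            samples.append((instance_id, pollster.name, volume))
        return samples  # "Finished polling pollster <name> ..."

    heartbeats = {}
    print(run_pollster(StubPollster("memory.usage"),
                       ["a8b199f7-8cd5-45ea-bc7e-af8352a6afa2"], heartbeats))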
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.031 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.031 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.031 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.031 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.032 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T01:58:44.032046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.032 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.032 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt>]
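The ERROR above records the manager's permanent-blacklist behavior: LibvirtInspector supplies no data for the *.rate pollsters, so a ceilometer.polling.plugin_base.PollsterPermanentError excludes those instances from this pollster/source pair for good (the same pattern repeats for network.incoming.bytes.rate further down). A hedged sketch of that mechanism, with hypothetical names (ManagerSketch, RateStub) rather than ceilometer's actual code:

    class PollsterPermanentError(Exception):
        # hypothetical mirror of ceilometer.polling.plugin_base.PollsterPermanentError
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    class ManagerSketch:
        def __init__(self):
            # (source, pollster-name) -> resource ids never to poll again
            self._blacklist = {}

        def poll(self, source, pollster, resources):
            skip = self._blacklist.setdefault((source, pollster.name), set())
            resources = [r for r in resources if r not in skip]
            try:
                return pollster.get_samples(resources)
            except PollsterPermanentError as err:
                # matches the log: stop polling these resources on this source
                skip.update(err.resources)
                print(f"Prevent pollster {pollster.name} from polling "
                      f"{err.resources} on source {source} anymore!")
                return []

    class RateStub:
        name = "network.outgoing.bytes.rate"
        def get_samples(self, resources):
            if resources:  # the inspector has no rate data: fail permanently
                raise PollsterPermanentError(resources)
            return []

    m = ManagerSketch()
    m.poll("pollsters", RateStub(), ["vn-grg57o4"])
    print(m.poll("pollsters", RateStub(), ["vn-grg57o4"]))  # -> [] without re-raising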
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.033 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.034 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.034 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.034 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.034 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.035 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T01:58:44.034646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.035 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.036 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.036 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.038 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.038 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.038 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.038 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.038 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.039 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T01:58:44.038739) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.039 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.040 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.040 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.041 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.042 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.043 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.043 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.044 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.045 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.045 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.046 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T01:58:44.042616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.046 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.046 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.046 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.047 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T01:58:44.046473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.047 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.047 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.048 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.048 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.048 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.048 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.048 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.049 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.049 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.049 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.049 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.050 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.051 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.052 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T01:58:44.049116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T01:58:44.052063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.080 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.081 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.082 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.120 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.121 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.122 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.154 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.155 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.155 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.191 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.192 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.192 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.193 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.194 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.194 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.194 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.194 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T01:58:44.194623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.288 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.289 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.289 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.397 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.398 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.399 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.491 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.493 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.494 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.587 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.588 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.589 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.590 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.591 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.591 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.591 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.592 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.592 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.593 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T01:58:44.592437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.593 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt>]
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.594 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.594 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.594 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.595 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.595 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T01:58:44.595164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.596 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 1818076010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.597 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 286055535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.597 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 221080770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.597 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 2007436788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.598 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 283353651 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.598 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 197487344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.599 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.599 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.600 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.600 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 2021453674 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.601 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 321911498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 podman[425381]: 2025-11-26 01:58:44.601904611 +0000 UTC m=+0.143144804 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
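The podman health_status record above serializes the kepler container's config_data as a Python-style dict literal, so a captured copy parses with ast.literal_eval; a small sketch (the literal below is abridged from the log line):

    import ast

    # Abridged copy of the config_data embedded in the health_status record.
    config_data = ast.literal_eval(
        "{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
        "'healthcheck': {'test': '/openstack/healthcheck kepler', "
        "'mount': '/var/lib/openstack/healthchecks/kepler'}}"
    )
    print(config_data['healthcheck']['test'])  # -> /openstack/healthcheck kepler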
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.601 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 237452008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.602 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.603 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.603 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.603 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.604 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.605 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T01:58:44.604392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.605 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.606 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.606 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.607 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.607 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.608 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.608 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.608 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.609 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.609 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.609 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.609 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.610 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.610 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.610 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.611 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.611 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.611 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.612 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T01:58:44.611237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.612 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.612 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.613 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.613 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.613 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.613 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.614 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.614 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.614 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.615 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.615 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.615 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.616 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.616 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.616 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.616 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.616 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.617 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T01:58:44.616664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.617 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.618 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.618 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.618 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.619 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.619 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.619 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.620 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.620 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.620 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.620 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.630 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.630 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.630 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.630 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.631 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.631 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.632 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T01:58:44.631158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.632 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.632 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.633 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.633 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
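
The power.state samples above carry volume 1 for all four instances. A hedged aside: this value appears to be the libvirt domain state enum that nova exposes as the instance power state, so 1 would mean RUNNING. A minimal lookup, assuming that mapping:

    # Assumed mapping: libvirt virDomainState values; under that assumption
    # every "power.state volume: 1" sample above reports a running guest.
    LIBVIRT_DOMAIN_STATE = {
        0: "NOSTATE", 1: "RUNNING", 2: "BLOCKED", 3: "PAUSED",
        4: "SHUTDOWN", 5: "SHUTOFF", 6: "CRASHED", 7: "PMSUSPENDED",
    }
    print(LIBVIRT_DOMAIN_STATE[1])  # -> RUNNING
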
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.633 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.633 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.634 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.634 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.634 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.634 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 5109418941 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T01:58:44.634222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.635 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 30681884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.635 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.635 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 5738822785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.636 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 28688069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.636 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.636 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.636 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.637 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.637 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 8335163051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.637 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 31365598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.638 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.638 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.639 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.639 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.639 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.639 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.639 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.640 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T01:58:44.639465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.640 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.640 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.640 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.641 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.641 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.641 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.641 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.642 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.642 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.642 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.643 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.643 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.643 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.644 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.644 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.644 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.644 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.644 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T01:58:44.644372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.645 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.646 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.647 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.647 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.648 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 01:58:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 01:58:44.649 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
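
The polling cycle above emits one _stats_to_sample DEBUG line per instance/device pair. A minimal parsing sketch in Python, assuming only the line layout visible above (the regex and grouping are illustrative, not part of ceilometer's API):

    import re
    from collections import defaultdict

    # Matches: "... ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <n> ..."
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<vol>\d+)"
    )

    def samples_by_meter(lines):
        out = defaultdict(list)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                out[m["meter"]].append((m["uuid"], int(m["vol"])))
        return out

Note on the values: every 1073741824 above is exactly 2**30 bytes, i.e. a 1 GiB device, while 583680 and 485376 are the much smaller third device each instance reports.
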
Nov 26 01:58:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:46 compute-0 nova_compute[350387]: 2025-11-26 01:58:46.453 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:48 compute-0 nova_compute[350387]: 2025-11-26 01:58:48.570 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
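
The recurring nova_compute "[POLLIN] on fd 24 __log_wakeup" lines are the ovs Python IDL's poll loop waking up when the OVSDB socket becomes readable. A minimal sketch of that wait using the same ovs library the logged path points at (the fd value and its wiring to an Idl are assumptions):

    import select
    from ovs import poller

    def wait_readable(fd):
        p = poller.Poller()
        p.fd_wait(fd, select.POLLIN)  # register interest in readability
        p.block()  # sleeps until the fd fires; the wakeup is what gets logged
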
Nov 26 01:58:48 compute-0 podman[425401]: 2025-11-26 01:58:48.612478179 +0000 UTC m=+0.156852191 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd)
Nov 26 01:58:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:50 compute-0 podman[425423]: 2025-11-26 01:58:50.603561821 +0000 UTC m=+0.145845291 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:58:50 compute-0 podman[425422]: 2025-11-26 01:58:50.603790317 +0000 UTC m=+0.140822349 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Nov 26 01:58:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:58:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
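
The pg_autoscaler output above is reproducible arithmetic: each pool's pg target is usage ratio x bias x 300, where 300 is presumably mon_target_pg_per_osd (100) times the 3 OSDs backing this 60 GiB cluster; the factor 300 is exact in the log, the 100 x 3 split is an assumption. A quick check of a few pools:

    # Ratios and biases copied from the pg_autoscaler lines above.
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("vms",                0.0022107945480888194,  1.0),
        ("images",             0.00025334537995702286, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
    ]:
        print(pool, ratio * bias * 300)
    # -> 0.00215572..., 0.66323836..., 0.07600361..., 0.00061047...,
    #    matching the logged "pg target" values; "quantized to" then snaps
    #    the target onto an allowed pg_num (1, 16, 32 above).
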
Nov 26 01:58:51 compute-0 nova_compute[350387]: 2025-11-26 01:58:51.456 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:53 compute-0 nova_compute[350387]: 2025-11-26 01:58:53.573 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:58:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.091049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336091152, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1192, "num_deletes": 251, "total_data_size": 1798210, "memory_usage": 1822336, "flush_reason": "Manual Compaction"}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336106021, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1759200, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28919, "largest_seqno": 30110, "table_properties": {"data_size": 1753495, "index_size": 3100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12173, "raw_average_key_size": 19, "raw_value_size": 1742048, "raw_average_value_size": 2837, "num_data_blocks": 139, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122220, "oldest_key_time": 1764122220, "file_creation_time": 1764122336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 15030 microseconds, and 9219 cpu microseconds.
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.106101) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1759200 bytes OK
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.106123) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.109625) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.109649) EVENT_LOG_v1 {"time_micros": 1764122336109642, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.109671) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1792759, prev total WAL file size 1792759, number of live WAL files 2.
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.111738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1717KB)], [65(7008KB)]
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336111806, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8936332, "oldest_snapshot_seqno": -1}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4920 keys, 7217479 bytes, temperature: kUnknown
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336159354, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7217479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7185618, "index_size": 18439, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124469, "raw_average_key_size": 25, "raw_value_size": 7097536, "raw_average_value_size": 1442, "num_data_blocks": 759, "num_entries": 4920, "num_filter_entries": 4920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.159666) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7217479 bytes
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.162226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.9 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.8 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 5434, records dropped: 514 output_compression: NoCompression
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.162260) EVENT_LOG_v1 {"time_micros": 1764122336162243, "job": 36, "event": "compaction_finished", "compaction_time_micros": 47557, "compaction_time_cpu_micros": 33553, "output_level": 6, "num_output_files": 1, "total_output_size": 7217479, "num_input_records": 5434, "num_output_records": 4920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336163088, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122336165573, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.111469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.165802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.165808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.165812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.165815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 01:58:56 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-01:58:56.165883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
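
The JOB 36 summary above (187.9 MB/s rd, 151.8 wr, read-write-amplify 9.2, write-amplify 4.1) follows directly from the EVENT_LOG numbers; a quick re-derivation with the byte counts and time copied from the log:

    l0_in    = 1_759_200  # L0 input 000067.sst ("file_size" of the flush)
    total_in = 8_936_332  # "input_data_size": L0 plus L6 file 000065.sst
    out      = 7_217_479  # new L6 file 000068.sst
    t_us     = 47_557     # "compaction_time_micros"

    print(round(out / l0_in, 1))               # 4.1  -> write-amplify(4.1)
    print(round((total_in + out) / l0_in, 1))  # 9.2  -> read-write-amplify(9.2)
    print(round(total_in / t_us, 1))           # 187.9 rd (bytes/us == MB/s)
    print(round(out / t_us, 1))                # 151.8 wr
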
Nov 26 01:58:56 compute-0 nova_compute[350387]: 2025-11-26 01:58:56.459 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:58 compute-0 nova_compute[350387]: 2025-11-26 01:58:58.579 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:58:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:58:59 compute-0 podman[158021]: time="2025-11-26T01:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:58:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:58:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8633 "" "Go-http-client/1.1"
Nov 26 01:58:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:01 compute-0 openstack_network_exporter[367323]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:59:01 compute-0 openstack_network_exporter[367323]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:59:01 compute-0 openstack_network_exporter[367323]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:59:01 compute-0 openstack_network_exporter[367323]: ERROR   01:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:59:01 compute-0 openstack_network_exporter[367323]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
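
openstack_network_exporter is probing OVS/OVN daemons through their appctl control sockets; ovn-northd normally runs on controller nodes, and the dpif-netdev errors suggest no userspace datapath exists here, so on a compute host these errors are most likely expected noise rather than a fault. A quick check of what the exporter can actually see, assuming it discovers daemons via *.ctl files in the run directories its container mounts later in this log (/var/run/openvswitch and /var/lib/openvswitch/ovn):

#!/usr/bin/env python3
"""Sketch: list appctl control sockets visible on this host."""
from pathlib import Path

# Run directories taken from the exporter's volume mounts; the *.ctl
# naming is the usual ovs-appctl convention, assumed here.
for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    p = Path(rundir)
    socks = sorted(s.name for s in p.glob("*.ctl")) if p.exists() else []
    print(f"{rundir}: {socks or 'no control socket files'}")
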
Nov 26 01:59:01 compute-0 nova_compute[350387]: 2025-11-26 01:59:01.463 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:03 compute-0 nova_compute[350387]: 2025-11-26 01:59:03.583 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:06 compute-0 nova_compute[350387]: 2025-11-26 01:59:06.466 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:08 compute-0 nova_compute[350387]: 2025-11-26 01:59:08.587 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:09 compute-0 podman[425467]: 2025-11-26 01:59:09.5729265 +0000 UTC m=+0.106882842 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:59:09 compute-0 podman[425466]: 2025-11-26 01:59:09.595676802 +0000 UTC m=+0.135952612 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 01:59:09 compute-0 podman[425468]: 2025-11-26 01:59:09.613088342 +0000 UTC m=+0.141508298 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 01:59:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:11 compute-0 nova_compute[350387]: 2025-11-26 01:59:11.470 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:12 compute-0 podman[425522]: 2025-11-26 01:59:12.592413895 +0000 UTC m=+0.138684298 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 01:59:12 compute-0 podman[425523]: 2025-11-26 01:59:12.61104855 +0000 UTC m=+0.155993106 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:59:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:13 compute-0 nova_compute[350387]: 2025-11-26 01:59:13.590 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:15 compute-0 podman[425566]: 2025-11-26 01:59:15.017602894 +0000 UTC m=+0.123365277 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm)
Nov 26 01:59:16 compute-0 nova_compute[350387]: 2025-11-26 01:59:16.473 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:18 compute-0 nova_compute[350387]: 2025-11-26 01:59:18.596 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:19 compute-0 podman[425587]: 2025-11-26 01:59:19.598179112 +0000 UTC m=+0.130719914 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:59:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:20 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 01:59:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:21 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d268bea3-028b-4e5a-acac-6e8cbadf8884 does not exist
Nov 26 01:59:21 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 17c83b7f-f6b5-401e-b095-0685fa292f0b does not exist
Nov 26 01:59:21 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3a2ff960-0489-4c07-85e2-be5cadfe7e1a does not exist
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 01:59:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 01:59:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:21 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 01:59:21 compute-0 nova_compute[350387]: 2025-11-26 01:59:21.475 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:21 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 01:59:21 compute-0 podman[425762]: 2025-11-26 01:59:21.583304963 +0000 UTC m=+0.156788639 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Nov 26 01:59:21 compute-0 podman[425763]: 2025-11-26 01:59:21.584141156 +0000 UTC m=+0.151661074 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 01:59:22 compute-0 nova_compute[350387]: 2025-11-26 01:59:22.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.358603997 +0000 UTC m=+0.088583107 container create 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.329883488 +0000 UTC m=+0.059862588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:22 compute-0 systemd[1]: Started libpod-conmon-2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e.scope.
Nov 26 01:59:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.52270598 +0000 UTC m=+0.252685070 container init 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.536354465 +0000 UTC m=+0.266333575 container start 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 01:59:22 compute-0 elastic_poitras[425939]: 167 167
Nov 26 01:59:22 compute-0 systemd[1]: libpod-2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e.scope: Deactivated successfully.
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.551577814 +0000 UTC m=+0.281556944 container attach 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.552125529 +0000 UTC m=+0.282104619 container died 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-90106d91194c0b24562ccbc5f111a94de2c70a6ea907a6b7dc870ca79e7a2c55-merged.mount: Deactivated successfully.
Nov 26 01:59:22 compute-0 podman[425923]: 2025-11-26 01:59:22.612750197 +0000 UTC m=+0.342729287 container remove 2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 01:59:22 compute-0 systemd[1]: libpod-conmon-2cd99fa42fbc2c36f25dba529aeb5a81219bb5d012c87c39b40c0072b4cfde1e.scope: Deactivated successfully.
Nov 26 01:59:22 compute-0 podman[425962]: 2025-11-26 01:59:22.912575935 +0000 UTC m=+0.087701322 container create a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:59:22 compute-0 podman[425962]: 2025-11-26 01:59:22.887946401 +0000 UTC m=+0.063071778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:22 compute-0 systemd[1]: Started libpod-conmon-a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df.scope.
Nov 26 01:59:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:23 compute-0 podman[425962]: 2025-11-26 01:59:23.106628963 +0000 UTC m=+0.281754380 container init a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:59:23 compute-0 podman[425962]: 2025-11-26 01:59:23.135681471 +0000 UTC m=+0.310806858 container start a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 01:59:23 compute-0 podman[425962]: 2025-11-26 01:59:23.143250814 +0000 UTC m=+0.318376251 container attach a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.336 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.338 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.338 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.600 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 01:59:23 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:59:23 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231059027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:59:23 compute-0 nova_compute[350387]: 2025-11-26 01:59:23.861 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
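
The resource-tracker audit above shells out to the ceph CLI rather than a bindings library; both the exact command and its ~0.5 s round trip are in the log. The same probe done by hand, as a sketch (it assumes the client.openstack keyring and /etc/ceph/ceph.conf are readable, as they evidently are for nova_compute):

#!/usr/bin/env python3
"""Sketch: the `ceph df` probe nova_compute runs during update_available_resource."""
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
df = json.loads(out.stdout)
# A top-level "stats" object with total/avail byte counts is part of the
# `ceph df` JSON in current Ceph releases; field names assumed from that.
stats = df["stats"]
print(f"total={stats['total_bytes']} avail={stats['total_avail_bytes']}")
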
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.003 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.004 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.004 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.012 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.012 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.013 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.021 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.022 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.022 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.029 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.031 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.031 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 01:59:24 compute-0 intelligent_bhabha[425978]: --> passed data devices: 0 physical, 3 LVM
Nov 26 01:59:24 compute-0 intelligent_bhabha[425978]: --> relative data size: 1.0
Nov 26 01:59:24 compute-0 intelligent_bhabha[425978]: --> All data devices are unavailable
Nov 26 01:59:24 compute-0 systemd[1]: libpod-a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df.scope: Deactivated successfully.
Nov 26 01:59:24 compute-0 podman[425962]: 2025-11-26 01:59:24.48008052 +0000 UTC m=+1.655205877 container died a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 01:59:24 compute-0 systemd[1]: libpod-a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df.scope: Consumed 1.232s CPU time.
Nov 26 01:59:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc60a57b5a54a052938c2544fe0f1df94a4ee06c0af3b7b3f94b8ce8376a5ae-merged.mount: Deactivated successfully.
Nov 26 01:59:24 compute-0 podman[425962]: 2025-11-26 01:59:24.554380933 +0000 UTC m=+1.729506310 container remove a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bhabha, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:59:24 compute-0 systemd[1]: libpod-conmon-a8ecef2c88075da0adf636ffbe202a5fdd9f44893eac71e8eb38070b26f574df.scope: Deactivated successfully.
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.575 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.576 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3155MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.577 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.577 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.702 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 01:59:24 compute-0 nova_compute[350387]: 2025-11-26 01:59:24.860 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
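The resource tracker finishes its audit by shelling out to ceph because the node's disk pool lives on RBD. A minimal sketch of the same probe, using only the flags visible in the log line above; the JSON field names follow the usual `ceph df --format=json` schema and should be treated as assumptions if your release differs:

```python
# Reproduce the "ceph df" probe oslo_concurrency.processutils just logged.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

stats = json.loads(out)["stats"]  # assumed schema: stats.total_avail_bytes
print(f"cluster free: {stats['total_avail_bytes'] / 1024 ** 3:.1f} GiB")
```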
Nov 26 01:59:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:59:24.977 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 01:59:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:59:24.979 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 01:59:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 01:59:24.979 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 01:59:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 01:59:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622904872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 01:59:25 compute-0 nova_compute[350387]: 2025-11-26 01:59:25.389 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 01:59:25 compute-0 nova_compute[350387]: 2025-11-26 01:59:25.401 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 01:59:25 compute-0 nova_compute[350387]: 2025-11-26 01:59:25.422 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 01:59:25 compute-0 nova_compute[350387]: 2025-11-26 01:59:25.423 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 01:59:25 compute-0 nova_compute[350387]: 2025-11-26 01:59:25.424 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
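The inventory logged at 01:59:25.422 is what placement actually schedules against: for each resource class, capacity is (total - reserved) * allocation_ratio, so the 4.0 VCPU ratio turns 8 physical cores into 32 schedulable ones even though the host view above says free_vcpus=4. A quick check against the logged values:

```python
# Placement-style capacity math, plugging in the inventory from the log line.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```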
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.618068673 +0000 UTC m=+0.096914202 container create 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.582198362 +0000 UTC m=+0.061043961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:25 compute-0 systemd[1]: Started libpod-conmon-532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32.scope.
Nov 26 01:59:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.806938254 +0000 UTC m=+0.285783823 container init 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.824459078 +0000 UTC m=+0.303304577 container start 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.829802489 +0000 UTC m=+0.308648068 container attach 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 01:59:25 compute-0 funny_greider[426219]: 167 167
Nov 26 01:59:25 compute-0 systemd[1]: libpod-532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32.scope: Deactivated successfully.
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.839233324 +0000 UTC m=+0.318078843 container died 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 01:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-43eacf2d08bbfc1a77693062129a04f9dde8e19c9d140bac9b5b5ebdd034f2c1-merged.mount: Deactivated successfully.
Nov 26 01:59:25 compute-0 podman[426203]: 2025-11-26 01:59:25.920810723 +0000 UTC m=+0.399656242 container remove 532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 01:59:25 compute-0 systemd[1]: libpod-conmon-532035087df2648477aa6a515ceb6466ce8cc7d12fc7555069f52e9279098b32.scope: Deactivated successfully.
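The `funny_greider` container lives for well under a second, prints `167 167`, and is removed. That pattern is consistent with cephadm probing the uid/gid baked into the ceph image (167:167 is the `ceph` user and group upstream). A hedged reproduction; the `stat` entrypoint and the `/var/lib/ceph` path are assumptions based on the output, not taken from this log:

```python
# Sketch of the uid/gid probe a throwaway cephadm-style container performs.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout
uid, gid = out.split()
print(uid, gid)  # expected: 167 167, matching the container output above
```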
Nov 26 01:59:26 compute-0 podman[426242]: 2025-11-26 01:59:26.235585042 +0000 UTC m=+0.095364758 container create f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 01:59:26 compute-0 podman[426242]: 2025-11-26 01:59:26.200954466 +0000 UTC m=+0.060734212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:26 compute-0 systemd[1]: Started libpod-conmon-f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc.scope.
Nov 26 01:59:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b8db83291b319eac9061d60b8bb832a17048ea4aae0abe8af3c3a8f030ba5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b8db83291b319eac9061d60b8bb832a17048ea4aae0abe8af3c3a8f030ba5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b8db83291b319eac9061d60b8bb832a17048ea4aae0abe8af3c3a8f030ba5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b8db83291b319eac9061d60b8bb832a17048ea4aae0abe8af3c3a8f030ba5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:26 compute-0 podman[426242]: 2025-11-26 01:59:26.380948268 +0000 UTC m=+0.240728034 container init f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 01:59:26 compute-0 podman[426242]: 2025-11-26 01:59:26.4048157 +0000 UTC m=+0.264595406 container start f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:59:26 compute-0 podman[426242]: 2025-11-26 01:59:26.412467866 +0000 UTC m=+0.272247582 container attach f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 01:59:26 compute-0 nova_compute[350387]: 2025-11-26 01:59:26.424 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:26 compute-0 nova_compute[350387]: 2025-11-26 01:59:26.424 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 01:59:26 compute-0 nova_compute[350387]: 2025-11-26 01:59:26.425 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 01:59:26 compute-0 nova_compute[350387]: 2025-11-26 01:59:26.481 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 01:59:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1608180734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 01:59:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 01:59:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1608180734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 01:59:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]: {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    "0": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "devices": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "/dev/loop3"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            ],
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_name": "ceph_lv0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_size": "21470642176",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "name": "ceph_lv0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "tags": {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_name": "ceph",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.crush_device_class": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.encrypted": "0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_id": "0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.vdo": "0"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            },
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "vg_name": "ceph_vg0"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        }
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    ],
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    "1": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "devices": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "/dev/loop4"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            ],
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_name": "ceph_lv1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_size": "21470642176",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "name": "ceph_lv1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "tags": {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_name": "ceph",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.crush_device_class": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.encrypted": "0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_id": "1",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.vdo": "0"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            },
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "vg_name": "ceph_vg1"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        }
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    ],
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    "2": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "devices": [
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "/dev/loop5"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            ],
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_name": "ceph_lv2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_size": "21470642176",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "name": "ceph_lv2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "tags": {
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cephx_lockbox_secret": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.cluster_name": "ceph",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.crush_device_class": "",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.encrypted": "0",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osd_id": "2",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:                "ceph.vdo": "0"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            },
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "type": "block",
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:            "vg_name": "ceph_vg2"
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:        }
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]:    ]
Nov 26 01:59:27 compute-0 gallant_vaughan[426258]: }
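The JSON emitted by `gallant_vaughan` has the shape of a `ceph-volume lvm list --format json` report: OSD-id keys, each holding a list of logical volumes with their `ceph.*` tags. A small parser for that shape (the command name is inferred from the output, not logged):

```python
# Summarize a ceph-volume lvm list style report; `report` is the captured
# container stdout, e.g. the JSON block printed above.
import json

def summarize(report: str) -> None:
    for osd_id, lvs in sorted(json.loads(report).items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")
```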
Nov 26 01:59:27 compute-0 systemd[1]: libpod-f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc.scope: Deactivated successfully.
Nov 26 01:59:27 compute-0 podman[426242]: 2025-11-26 01:59:27.217259881 +0000 UTC m=+1.077039597 container died f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 01:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b8db83291b319eac9061d60b8bb832a17048ea4aae0abe8af3c3a8f030ba5e-merged.mount: Deactivated successfully.
Nov 26 01:59:27 compute-0 podman[426242]: 2025-11-26 01:59:27.317174495 +0000 UTC m=+1.176954201 container remove f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 01:59:27 compute-0 systemd[1]: libpod-conmon-f8af9c897c23cf815a94213541ed80515007544f456333982c4befef7c7c02fc.scope: Deactivated successfully.
Nov 26 01:59:27 compute-0 nova_compute[350387]: 2025-11-26 01:59:27.572 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 01:59:27 compute-0 nova_compute[350387]: 2025-11-26 01:59:27.574 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 01:59:27 compute-0 nova_compute[350387]: 2025-11-26 01:59:27.574 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 01:59:27 compute-0 nova_compute[350387]: 2025-11-26 01:59:27.574 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.338163512 +0000 UTC m=+0.061980638 container create b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.318325963 +0000 UTC m=+0.042143109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:28 compute-0 systemd[1]: Started libpod-conmon-b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb.scope.
Nov 26 01:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.476454168 +0000 UTC m=+0.200271344 container init b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.492002846 +0000 UTC m=+0.215819992 container start b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.497547762 +0000 UTC m=+0.221364908 container attach b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 01:59:28 compute-0 loving_bohr[426436]: 167 167
Nov 26 01:59:28 compute-0 systemd[1]: libpod-b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb.scope: Deactivated successfully.
Nov 26 01:59:28 compute-0 conmon[426436]: conmon b086dfdd6b2110d0f44e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb.scope/container/memory.events
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.501147024 +0000 UTC m=+0.224964150 container died b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb8594b2e9d3e994ca4e569a1515a45492956ed7152d5b8479901bb11a22cad4-merged.mount: Deactivated successfully.
Nov 26 01:59:28 compute-0 podman[426420]: 2025-11-26 01:59:28.555918107 +0000 UTC m=+0.279735243 container remove b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bohr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 01:59:28 compute-0 systemd[1]: libpod-conmon-b086dfdd6b2110d0f44e89c01a096d234f6ea886173663a4a05377293462e8cb.scope: Deactivated successfully.
Nov 26 01:59:28 compute-0 nova_compute[350387]: 2025-11-26 01:59:28.605 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:28 compute-0 podman[426458]: 2025-11-26 01:59:28.801756903 +0000 UTC m=+0.064078096 container create 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 01:59:28 compute-0 podman[426458]: 2025-11-26 01:59:28.776150972 +0000 UTC m=+0.038472175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 01:59:28 compute-0 systemd[1]: Started libpod-conmon-05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34.scope.
Nov 26 01:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 01:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4070c22354072d38ad0ea996660e23c6043bce90c72178bc5f6592ab67b4901/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4070c22354072d38ad0ea996660e23c6043bce90c72178bc5f6592ab67b4901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4070c22354072d38ad0ea996660e23c6043bce90c72178bc5f6592ab67b4901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4070c22354072d38ad0ea996660e23c6043bce90c72178bc5f6592ab67b4901/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 01:59:29 compute-0 podman[426458]: 2025-11-26 01:59:29.001403888 +0000 UTC m=+0.263725131 container init 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 01:59:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:29 compute-0 podman[426458]: 2025-11-26 01:59:29.020951689 +0000 UTC m=+0.283272912 container start 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 01:59:29 compute-0 podman[426458]: 2025-11-26 01:59:29.026576138 +0000 UTC m=+0.288897401 container attach 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 01:59:29 compute-0 podman[158021]: time="2025-11-26T01:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:59:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45386 "" "Go-http-client/1.1"
Nov 26 01:59:29 compute-0 podman[158021]: @ - - [26/Nov/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9055 "" "Go-http-client/1.1"
Nov 26 01:59:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.132 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
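The network_info blob nova just cached is a list of VIFs; each carries the subnet's fixed IPs, with any floating IPs nested under the fixed address they map to. Walking that structure, assuming only the shape shown in the log line:

```python
# Print fixed and floating addresses per VIF from a nova network_info list
# (already parsed from JSON, as in the cache update above).
def addresses(network_info: list) -> None:
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(f"{vif['devname']}: fixed={ip['address']} "
                      f"floating={floats or '-'}")
# For the cached entry above: tapa47ff2b9-72: fixed=192.168.0.29
# floating=['192.168.122.186']
```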
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]: {
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_id": 0,
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "type": "bluestore"
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    },
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_id": 2,
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "type": "bluestore"
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    },
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_id": 1,
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:        "type": "bluestore"
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]:    }
Nov 26 01:59:30 compute-0 frosty_brahmagupta[426474]: }
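The second report, from `frosty_brahmagupta`, matches the `ceph-volume raw list --format json` layout: keyed by OSD uuid, one bluestore device each (again, the command name is an inference from the output shape). Mapping it back to OSD ids is a few lines:

```python
# Map osd_id -> block device from a report shaped like the one above.
import json

def map_osds(raw_report: str) -> dict[int, str]:
    return {entry["osd_id"]: entry["device"]
            for entry in json.loads(raw_report).values()}

# e.g. {0: '/dev/mapper/ceph_vg0-ceph_lv0', 1: '/dev/mapper/ceph_vg1-ceph_lv1',
#       2: '/dev/mapper/ceph_vg2-ceph_lv2'}
```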
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.158 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.159 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.160 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.160 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.161 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 01:59:30 compute-0 systemd[1]: libpod-05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34.scope: Deactivated successfully.
Nov 26 01:59:30 compute-0 systemd[1]: libpod-05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34.scope: Consumed 1.157s CPU time.
Nov 26 01:59:30 compute-0 podman[426507]: 2025-11-26 01:59:30.269292331 +0000 UTC m=+0.051682127 container died 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4070c22354072d38ad0ea996660e23c6043bce90c72178bc5f6592ab67b4901-merged.mount: Deactivated successfully.
Nov 26 01:59:30 compute-0 nova_compute[350387]: 2025-11-26 01:59:30.339 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 01:59:30 compute-0 podman[426507]: 2025-11-26 01:59:30.380002681 +0000 UTC m=+0.162392397 container remove 05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_brahmagupta, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 01:59:30 compute-0 systemd[1]: libpod-conmon-05fe9bd23937263ce150de341e1b6cc9c82e1dc930780ce9a7372434064f0d34.scope: Deactivated successfully.
Nov 26 01:59:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 01:59:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 01:59:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 73dda418-ff56-480b-bd97-128e95ef7f6a does not exist
Nov 26 01:59:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fa1ca15d-506d-45be-af90-2a31598b0d1b does not exist
Nov 26 01:59:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: ERROR   01:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 01:59:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 01:59:31 compute-0 nova_compute[350387]: 2025-11-26 01:59:31.534 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 01:59:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:33 compute-0 nova_compute[350387]: 2025-11-26 01:59:33.610 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:36 compute-0 nova_compute[350387]: 2025-11-26 01:59:36.532 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:38 compute-0 nova_compute[350387]: 2025-11-26 01:59:38.614 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:40 compute-0 podman[426571]: 2025-11-26 01:59:40.590917086 +0000 UTC m=+0.127336349 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 01:59:40 compute-0 podman[426573]: 2025-11-26 01:59:40.60100705 +0000 UTC m=+0.131009172 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 01:59:40 compute-0 podman[426572]: 2025-11-26 01:59:40.607691139 +0000 UTC m=+0.145177072 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_01:59:41
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms']
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 01:59:41 compute-0 nova_compute[350387]: 2025-11-26 01:59:41.535 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:59:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 01:59:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:43 compute-0 podman[426628]: 2025-11-26 01:59:43.58380275 +0000 UTC m=+0.136020424 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 01:59:43 compute-0 nova_compute[350387]: 2025-11-26 01:59:43.616 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:43 compute-0 podman[426629]: 2025-11-26 01:59:43.651579759 +0000 UTC m=+0.192282368 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 01:59:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:45 compute-0 podman[426671]: 2025-11-26 01:59:45.589052648 +0000 UTC m=+0.133691568 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., release-0.7.12=)
Nov 26 01:59:46 compute-0 nova_compute[350387]: 2025-11-26 01:59:46.538 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:48 compute-0 nova_compute[350387]: 2025-11-26 01:59:48.620 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:50 compute-0 podman[426691]: 2025-11-26 01:59:50.599409935 +0000 UTC m=+0.146770166 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 01:59:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
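The pg_autoscaler lines above carry enough to reproduce the raw target before quantization: pg_target = used_ratio x bias x pg_budget, where the budget that matches every pool in this excerpt is 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; the OSD count itself is an assumption, as it is not printed here). The raw value is then quantized to a power of two subject to per-pool minimums, which is why tiny pools still sit at 32 PGs. A minimal check against the logged figures:

    # Reproduce the autoscaler arithmetic from the log lines above.
    # pg_budget=300 is an assumption that fits all pools in this excerpt.
    def pg_target(used_ratio, bias, pg_budget=300):
        return used_ratio * bias * pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.00215572... -> '.mgr', quantized to 1
    print(pg_target(0.0022107945480888194, 1.0))   # 0.66323836... -> 'vms', quantized to 32
    print(pg_target(5.087256625643029e-07, 4.0))   # 0.00061047... -> 'cephfs.cephfs.meta'
    print(pg_target(2.5436283128215145e-07, 1.0))  # 7.63088e-05  -> '.rgw.root'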
Nov 26 01:59:51 compute-0 nova_compute[350387]: 2025-11-26 01:59:51.542 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:52 compute-0 podman[426711]: 2025-11-26 01:59:52.603449859 +0000 UTC m=+0.132538025 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 01:59:52 compute-0 podman[426710]: 2025-11-26 01:59:52.620145569 +0000 UTC m=+0.158692142 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=)
Nov 26 01:59:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:53 compute-0 nova_compute[350387]: 2025-11-26 01:59:53.627 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 01:59:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:56 compute-0 nova_compute[350387]: 2025-11-26 01:59:56.545 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:58 compute-0 nova_compute[350387]: 2025-11-26 01:59:58.633 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 01:59:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 01:59:59 compute-0 podman[158021]: time="2025-11-26T01:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 01:59:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 01:59:59 compute-0 podman[158021]: @ - - [26/Nov/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
Nov 26 01:59:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:01 compute-0 openstack_network_exporter[367323]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:00:01 compute-0 openstack_network_exporter[367323]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:00:01 compute-0 openstack_network_exporter[367323]: ERROR   02:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:00:01 compute-0 openstack_network_exporter[367323]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:00:01 compute-0 openstack_network_exporter[367323]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:00:01 compute-0 nova_compute[350387]: 2025-11-26 02:00:01.548 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:03 compute-0 nova_compute[350387]: 2025-11-26 02:00:03.638 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:06 compute-0 nova_compute[350387]: 2025-11-26 02:00:06.552 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:08 compute-0 nova_compute[350387]: 2025-11-26 02:00:08.642 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:11 compute-0 nova_compute[350387]: 2025-11-26 02:00:11.558 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:11 compute-0 podman[426754]: 2025-11-26 02:00:11.562522519 +0000 UTC m=+0.098599609 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:00:11 compute-0 podman[426753]: 2025-11-26 02:00:11.575381151 +0000 UTC m=+0.118386566 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:00:11 compute-0 podman[426752]: 2025-11-26 02:00:11.611673504 +0000 UTC m=+0.156619314 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:00:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:13 compute-0 nova_compute[350387]: 2025-11-26 02:00:13.647 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:14 compute-0 podman[426813]: 2025-11-26 02:00:14.595645428 +0000 UTC m=+0.150072119 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:00:14 compute-0 podman[426814]: 2025-11-26 02:00:14.653521949 +0000 UTC m=+0.210014549 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 26 02:00:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:16 compute-0 nova_compute[350387]: 2025-11-26 02:00:16.563 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:16 compute-0 podman[426856]: 2025-11-26 02:00:16.591277976 +0000 UTC m=+0.131929869 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, name=ubi9, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:00:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:18 compute-0 nova_compute[350387]: 2025-11-26 02:00:18.651 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:21 compute-0 nova_compute[350387]: 2025-11-26 02:00:21.566 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:21 compute-0 podman[426876]: 2025-11-26 02:00:21.584742456 +0000 UTC m=+0.128701677 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:00:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:23 compute-0 nova_compute[350387]: 2025-11-26 02:00:23.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:23 compute-0 podman[426895]: 2025-11-26 02:00:23.593556806 +0000 UTC m=+0.146090618 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 02:00:23 compute-0 podman[426896]: 2025-11-26 02:00:23.613527418 +0000 UTC m=+0.157934531 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:00:23 compute-0 nova_compute[350387]: 2025-11-26 02:00:23.655 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:24 compute-0 nova_compute[350387]: 2025-11-26 02:00:24.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:00:24.979 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:00:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:00:24.979 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:00:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:00:24.980 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
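The Acquiring / acquired :: waited / "released" :: held trio above is the standard oslo.concurrency named-lock pattern; the DEBUG messages at lockutils.py:404/409/423 come from the wrapper that lockutils puts around the locked callable. A minimal sketch that emits the same three lines when oslo.concurrency logging is at DEBUG (the lock name is the one from the log; the function body is a stand-in):

    # Sketch of the named-lock pattern behind the three DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Work done here runs with the named in-process lock held; the log
        # reports queueing time as "waited N.NNNs" and run time as "held N.NNNs".
        pass

    check_child_processes()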
Nov 26 02:00:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:25 compute-0 nova_compute[350387]: 2025-11-26 02:00:25.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:25 compute-0 nova_compute[350387]: 2025-11-26 02:00:25.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:00:26 compute-0 nova_compute[350387]: 2025-11-26 02:00:26.569 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:00:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31864052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:00:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:00:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31864052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:00:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:27 compute-0 nova_compute[350387]: 2025-11-26 02:00:27.593 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:00:27 compute-0 nova_compute[350387]: 2025-11-26 02:00:27.594 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:00:27 compute-0 nova_compute[350387]: 2025-11-26 02:00:27.595 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:00:28 compute-0 nova_compute[350387]: 2025-11-26 02:00:28.659 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:29 compute-0 podman[158021]: time="2025-11-26T02:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:00:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:00:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Nov 26 02:00:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.339 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.365 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.366 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.367 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.367 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.367 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.368 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.368 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.408 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.408 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.409 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.409 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.410 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:00:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:00:30 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/369296469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:00:30 compute-0 nova_compute[350387]: 2025-11-26 02:00:30.916 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.046 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.046 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.047 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.055 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.057 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.057 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.065 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.065 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.066 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.075 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.075 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.076 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:00:31 compute-0 openstack_network_exporter[367323]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:00:31 compute-0 openstack_network_exporter[367323]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:00:31 compute-0 openstack_network_exporter[367323]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:00:31 compute-0 openstack_network_exporter[367323]: ERROR   02:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:00:31 compute-0 openstack_network_exporter[367323]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.572 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.650 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.651 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.85565948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.652 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.652 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.767 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.768 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 0e500d52-72e1-4501-b4d6-fc6ca575760f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.768 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.769 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.769 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.770 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.789 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.807 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.807 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.855 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.885 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 02:00:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:00:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:00:31 compute-0 nova_compute[350387]: 2025-11-26 02:00:31.995 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:00:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:00:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:00:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:00:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5179a0df-e14e-4ccf-a916-1d18a7457a74 does not exist
Nov 26 02:00:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 481925a7-a404-48a7-bace-485293be7464 does not exist
Nov 26 02:00:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1c00c8df-b329-43c0-a00f-e162ae357537 does not exist
Nov 26 02:00:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:00:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:00:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:00:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:00:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:00:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:00:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515539067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:00:32 compute-0 nova_compute[350387]: 2025-11-26 02:00:32.523 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:00:32 compute-0 nova_compute[350387]: 2025-11-26 02:00:32.533 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:00:32 compute-0 nova_compute[350387]: 2025-11-26 02:00:32.550 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:00:32 compute-0 nova_compute[350387]: 2025-11-26 02:00:32.553 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:00:32 compute-0 nova_compute[350387]: 2025-11-26 02:00:32.554 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:00:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.066588466 +0000 UTC m=+0.089678997 container create e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.028149533 +0000 UTC m=+0.051240064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:33 compute-0 systemd[1]: Started libpod-conmon-e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb.scope.
Nov 26 02:00:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.226560804 +0000 UTC m=+0.249651375 container init e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.242626956 +0000 UTC m=+0.265717487 container start e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.248738368 +0000 UTC m=+0.271828969 container attach e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:00:33 compute-0 heuristic_snyder[427264]: 167 167
Nov 26 02:00:33 compute-0 systemd[1]: libpod-e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb.scope: Deactivated successfully.
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.25980124 +0000 UTC m=+0.282891771 container died e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-13f2cbe4c707326e5b239747e27b5a5e9d70ee5ac1e429291d79804398b3d8d7-merged.mount: Deactivated successfully.
Nov 26 02:00:33 compute-0 podman[427249]: 2025-11-26 02:00:33.34925448 +0000 UTC m=+0.372345021 container remove e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:00:33 compute-0 systemd[1]: libpod-conmon-e7ada2e81a71f7673c2e888ba96f840cf561e5edde1beee468a221305965bccb.scope: Deactivated successfully.
Nov 26 02:00:33 compute-0 nova_compute[350387]: 2025-11-26 02:00:33.485 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:33 compute-0 nova_compute[350387]: 2025-11-26 02:00:33.486 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:00:33 compute-0 podman[427286]: 2025-11-26 02:00:33.651348612 +0000 UTC m=+0.079717887 container create ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:00:33 compute-0 nova_compute[350387]: 2025-11-26 02:00:33.664 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:33 compute-0 podman[427286]: 2025-11-26 02:00:33.619786323 +0000 UTC m=+0.048155558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:33 compute-0 systemd[1]: Started libpod-conmon-ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0.scope.
Nov 26 02:00:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:33 compute-0 podman[427286]: 2025-11-26 02:00:33.834911584 +0000 UTC m=+0.263280839 container init ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:00:33 compute-0 podman[427286]: 2025-11-26 02:00:33.862789889 +0000 UTC m=+0.291159124 container start ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:00:33 compute-0 podman[427286]: 2025-11-26 02:00:33.867359998 +0000 UTC m=+0.295729243 container attach ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:00:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:35 compute-0 cranky_mahavira[427302]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:00:35 compute-0 cranky_mahavira[427302]: --> relative data size: 1.0
Nov 26 02:00:35 compute-0 cranky_mahavira[427302]: --> All data devices are unavailable
Nov 26 02:00:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:35 compute-0 systemd[1]: libpod-ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0.scope: Deactivated successfully.
Nov 26 02:00:35 compute-0 systemd[1]: libpod-ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0.scope: Consumed 1.197s CPU time.
Nov 26 02:00:35 compute-0 podman[427286]: 2025-11-26 02:00:35.126546626 +0000 UTC m=+1.554915871 container died ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9531dc290941a1e1463322aad0ed55846c60c41180bf8933b181eb39aaa35173-merged.mount: Deactivated successfully.
Nov 26 02:00:35 compute-0 podman[427286]: 2025-11-26 02:00:35.222724246 +0000 UTC m=+1.651093491 container remove ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:00:35 compute-0 systemd[1]: libpod-conmon-ab4c4eb3e684b29933caa684162d216f8bd61aabe6981de105381cbd2a6f7bb0.scope: Deactivated successfully.
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.316427672 +0000 UTC m=+0.071695771 container create f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:00:36 compute-0 systemd[1]: Started libpod-conmon-f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5.scope.
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.28725342 +0000 UTC m=+0.042521549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.436587227 +0000 UTC m=+0.191855336 container init f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.448635387 +0000 UTC m=+0.203903476 container start f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.45515963 +0000 UTC m=+0.210427699 container attach f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:00:36 compute-0 infallible_goldberg[427496]: 167 167
Nov 26 02:00:36 compute-0 systemd[1]: libpod-f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5.scope: Deactivated successfully.
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.460088419 +0000 UTC m=+0.215356518 container died f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 26 02:00:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-529191b7b75e3a2da03beecb6faef71bbe388ac73f286351d523f72811d4839e-merged.mount: Deactivated successfully.
Nov 26 02:00:36 compute-0 podman[427481]: 2025-11-26 02:00:36.52966717 +0000 UTC m=+0.284935239 container remove f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldberg, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:00:36 compute-0 systemd[1]: libpod-conmon-f35ca088da8f57059c2c11559850bbbdf7bd4273e090ffba303d2c1cbbb159b5.scope: Deactivated successfully.
Nov 26 02:00:36 compute-0 nova_compute[350387]: 2025-11-26 02:00:36.574 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:36 compute-0 podman[427519]: 2025-11-26 02:00:36.869911127 +0000 UTC m=+0.111802042 container create 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:00:36 compute-0 podman[427519]: 2025-11-26 02:00:36.831225686 +0000 UTC m=+0.073116672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:36 compute-0 systemd[1]: Started libpod-conmon-3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8.scope.
Nov 26 02:00:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e7977e439492a63b830f8d1b5ee8265807af5fce3e937a32fe36902bfe2be7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e7977e439492a63b830f8d1b5ee8265807af5fce3e937a32fe36902bfe2be7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e7977e439492a63b830f8d1b5ee8265807af5fce3e937a32fe36902bfe2be7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e7977e439492a63b830f8d1b5ee8265807af5fce3e937a32fe36902bfe2be7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:37 compute-0 podman[427519]: 2025-11-26 02:00:37.038740983 +0000 UTC m=+0.280631948 container init 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:00:37 compute-0 podman[427519]: 2025-11-26 02:00:37.058598833 +0000 UTC m=+0.300489718 container start 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 02:00:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:37 compute-0 podman[427519]: 2025-11-26 02:00:37.064002685 +0000 UTC m=+0.305893560 container attach 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]: {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    "0": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "devices": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "/dev/loop3"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            ],
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_name": "ceph_lv0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_size": "21470642176",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "name": "ceph_lv0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "tags": {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_name": "ceph",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.crush_device_class": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.encrypted": "0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_id": "0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.vdo": "0"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            },
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "vg_name": "ceph_vg0"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        }
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    ],
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    "1": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "devices": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "/dev/loop4"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            ],
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_name": "ceph_lv1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_size": "21470642176",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "name": "ceph_lv1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "tags": {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_name": "ceph",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.crush_device_class": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.encrypted": "0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_id": "1",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.vdo": "0"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            },
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "vg_name": "ceph_vg1"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        }
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    ],
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    "2": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "devices": [
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "/dev/loop5"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            ],
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_name": "ceph_lv2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_size": "21470642176",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "name": "ceph_lv2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "tags": {
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.cluster_name": "ceph",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.crush_device_class": "",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.encrypted": "0",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osd_id": "2",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:                "ceph.vdo": "0"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            },
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "type": "block",
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:            "vg_name": "ceph_vg2"
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:        }
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]:    ]
Nov 26 02:00:37 compute-0 vibrant_swirles[427534]: }
Nov 26 02:00:37 compute-0 systemd[1]: libpod-3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8.scope: Deactivated successfully.
Nov 26 02:00:38 compute-0 podman[427544]: 2025-11-26 02:00:38.055734617 +0000 UTC m=+0.060054963 container died 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e7977e439492a63b830f8d1b5ee8265807af5fce3e937a32fe36902bfe2be7-merged.mount: Deactivated successfully.
Nov 26 02:00:38 compute-0 podman[427544]: 2025-11-26 02:00:38.190674279 +0000 UTC m=+0.194994575 container remove 3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:00:38 compute-0 systemd[1]: libpod-conmon-3b4baf069a017ae2374f89c1070e84f9efe80050aa064fac67a6bdb1856b97b8.scope: Deactivated successfully.
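The JSON printed by vibrant_swirles above matches the output of `ceph-volume lvm list --format json` (keyed by OSD id), which cephadm runs in a one-shot container to inventory the LVM-backed OSDs on this host. Because the container logs to the journal under its own identifier, the payload can be pulled back out and summarized; a minimal sketch, treating the extraction command as an assumption about this host's journald setup:

```python
#!/usr/bin/env python3
"""Summarize the `ceph-volume lvm list --format json` payload logged above.

A sketch: `journalctl -t vibrant_swirles -o cat` reprints just that
container's stdout (the JSON document), which we then parse.
"""
import json
import subprocess

payload = subprocess.run(
    ["journalctl", "-t", "vibrant_swirles", "-o", "cat"],
    capture_output=True, text=True, check=True,
).stdout
inventory = json.loads(payload)  # {"0": [{...}], "1": [{...}], "2": [{...}]}

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"osd_fsid={tags.get('ceph.osd_fsid')} "
              f"devices={','.join(lv.get('devices', []))}")
```

On this host that yields three OSDs (osd.0 through osd.2), each on a dedicated VG/LV pair backed by a loop device.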
Nov 26 02:00:38 compute-0 nova_compute[350387]: 2025-11-26 02:00:38.669 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.379056701 +0000 UTC m=+0.089395520 container create 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.341220685 +0000 UTC m=+0.051559554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:39 compute-0 systemd[1]: Started libpod-conmon-0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0.scope.
Nov 26 02:00:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.541599951 +0000 UTC m=+0.251938820 container init 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.56215321 +0000 UTC m=+0.272492039 container start 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.568470968 +0000 UTC m=+0.278809847 container attach 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:00:39 compute-0 hungry_hellman[427711]: 167 167
Nov 26 02:00:39 compute-0 systemd[1]: libpod-0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0.scope: Deactivated successfully.
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.573536221 +0000 UTC m=+0.283875050 container died 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9eac2974ed8a29c6d96444f130fbe51be64b053ed1a4717e9e7e530c0b36691-merged.mount: Deactivated successfully.
Nov 26 02:00:39 compute-0 podman[427695]: 2025-11-26 02:00:39.645800237 +0000 UTC m=+0.356139056 container remove 0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hellman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:00:39 compute-0 systemd[1]: libpod-conmon-0a0ef88d8bcacf32d0276b6091cab495337a06f2b7f41f6765b3289ff5786fb0.scope: Deactivated successfully.
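The single line `167 167` printed by hungry_hellman is the uid/gid of the ceph user inside the image: cephadm probes it with a throwaway container so host-side files can be chowned to match. A hedged re-creation of that probe; the exact command cephadm runs may differ between releases:

```python
#!/usr/bin/env python3
"""Probe the ceph uid/gid baked into the container image.

A sketch of what the one-shot container above appears to be doing; the
image digest is taken from the log, the stat invocation is an assumption.
"""
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.split()
uid, gid = int(out[0]), int(out[1])
print(f"ceph uid={uid} gid={gid}")  # expected on this image: 167 167
```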
Nov 26 02:00:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
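The `_set_new_cache_sizes` line is the monitor's cache autotuner redistributing its memory budget (incremental and full osdmap caches plus the rocksdb cache) under the configured memory target. The knob it generally works toward can be read back; a sketch:

```python
#!/usr/bin/env python3
"""Read the monitor memory target behind the cache autotuning above.

A sketch; `mon_memory_target` is the option the autotuner is driven by
when mon memory autotuning is enabled, per the Ceph documentation.
"""
import subprocess

target = subprocess.run(
    ["ceph", "config", "get", "mon", "mon_memory_target"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"mon_memory_target = {target} bytes")
```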
Nov 26 02:00:39 compute-0 podman[427733]: 2025-11-26 02:00:39.967542122 +0000 UTC m=+0.090861491 container create 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:00:40 compute-0 podman[427733]: 2025-11-26 02:00:39.940577972 +0000 UTC m=+0.063897411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:00:40 compute-0 systemd[1]: Started libpod-conmon-0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e.scope.
Nov 26 02:00:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f86ebbbd30d39fcc8a4841ba26797e37d82bcffa0de3f5484dc24181b0c748d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f86ebbbd30d39fcc8a4841ba26797e37d82bcffa0de3f5484dc24181b0c748d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f86ebbbd30d39fcc8a4841ba26797e37d82bcffa0de3f5484dc24181b0c748d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f86ebbbd30d39fcc8a4841ba26797e37d82bcffa0de3f5484dc24181b0c748d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:00:40 compute-0 podman[427733]: 2025-11-26 02:00:40.125689928 +0000 UTC m=+0.249009387 container init 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:00:40 compute-0 podman[427733]: 2025-11-26 02:00:40.159748337 +0000 UTC m=+0.283067706 container start 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:00:40 compute-0 podman[427733]: 2025-11-26 02:00:40.167350961 +0000 UTC m=+0.290670440 container attach 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:00:41
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta']
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
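This balancer pass ran in upmap mode with a 5% misplaced-object ceiling and prepared 0/10 changes: the 321 PGs across the listed pools are already evenly distributed, so no upmap entries were generated. The same state can be inspected from the CLI; a sketch, noting that key names in the JSON vary slightly across Ceph releases:

```python
#!/usr/bin/env python3
"""Print the mgr balancer's mode and last optimization result.

A sketch using `ceph balancer status -f json`; the "optimize_result" key
name is an assumption and differs between Ceph versions.
"""
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "-f", "json"],
    capture_output=True, text=True, check=True,
).stdout)
print("active:", status.get("active"))
print("mode:  ", status.get("mode"))
print("result:", status.get("optimize_result"))
```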
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]: {
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_id": 0,
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "type": "bluestore"
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    },
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_id": 2,
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "type": "bluestore"
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    },
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_id": 1,
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:        "type": "bluestore"
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]:    }
Nov 26 02:00:41 compute-0 silly_elbakyan[427749]: }
Nov 26 02:00:41 compute-0 systemd[1]: libpod-0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e.scope: Deactivated successfully.
Nov 26 02:00:41 compute-0 podman[427733]: 2025-11-26 02:00:41.356069984 +0000 UTC m=+1.479389363 container died 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:00:41 compute-0 systemd[1]: libpod-0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e.scope: Consumed 1.183s CPU time.
Nov 26 02:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f86ebbbd30d39fcc8a4841ba26797e37d82bcffa0de3f5484dc24181b0c748d-merged.mount: Deactivated successfully.
Nov 26 02:00:41 compute-0 podman[427733]: 2025-11-26 02:00:41.441040378 +0000 UTC m=+1.564359747 container remove 0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:00:41 compute-0 systemd[1]: libpod-conmon-0aa0427f4268b5d0e83429f9cf5365c0e96ba89480f580817ba251f6b32bd28e.scope: Deactivated successfully.
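The silly_elbakyan payload is a second inventory of the same three OSDs, this time keyed by OSD fsid and reporting the device-mapper path; the shape matches `ceph-volume raw list --format json`, though cephadm's exact probe is version-dependent. Summarizing it mirrors the earlier LVM listing; a minimal sketch that reads the JSON document from stdin (e.g. via `journalctl -t silly_elbakyan -o cat`):

```python
#!/usr/bin/env python3
"""Summarize the fsid-keyed OSD inventory logged above.

A sketch over records shaped like the silly_elbakyan output: a dict of
osd_fsid -> {ceph_fsid, device, osd_id, osd_uuid, type}.
"""
import json
import sys

raw = json.load(sys.stdin)
for fsid, meta in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
    assert meta["osd_uuid"] == fsid  # the key doubles as the OSD fsid
    print(f"osd.{meta['osd_id']} ({meta['type']}) "
          f"device={meta['device']} cluster={meta['ceph_fsid']}")
```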
Nov 26 02:00:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:00:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:00:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:00:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
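The two `config-key set` commands show the cephadm mgr module persisting this host's freshly gathered device and host inventory into the monitor's config-key store (keys `mgr/cephadm/host.compute-0.devices.0` and `mgr/cephadm/host.compute-0`). The cached blob can be read back for inspection; a sketch, with the caveat that the stored JSON layout is a cephadm-internal detail that may change between releases:

```python
#!/usr/bin/env python3
"""Read back the device inventory cephadm cached in the config-key store.

A sketch: the key name is taken from the log; the structure of the value
is cephadm-internal and unversioned.
"""
import json
import subprocess

blob = subprocess.run(
    ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
    capture_output=True, text=True, check=True,
).stdout
print(json.dumps(json.loads(blob), indent=2)[:500])  # peek at the cache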
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 24dd887f-9c3d-4137-bff3-d809c085d729 does not exist
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1d8edde7-f2ff-44e7-86a8-6c6ad8a9330d does not exist
Nov 26 02:00:41 compute-0 nova_compute[350387]: 2025-11-26 02:00:41.576 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:00:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:00:41 compute-0 podman[427819]: 2025-11-26 02:00:41.789631329 +0000 UTC m=+0.090826310 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:00:41 compute-0 podman[427818]: 2025-11-26 02:00:41.815417206 +0000 UTC m=+0.127622857 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 02:00:41 compute-0 podman[427817]: 2025-11-26 02:00:41.819683106 +0000 UTC m=+0.127217825 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0)
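These three `container health_status` events are podman's periodic healthcheck timers firing for the edpm-managed containers; each runs the `/openstack/healthcheck` test declared in its config_data and reports healthy with a failing streak of 0. The current state of all healthchecks can be listed host-side; a sketch, noting that the `Status` string format is podman-version dependent:

```python
#!/usr/bin/env python3
"""List podman containers with their healthcheck state.

A sketch: `podman ps --format json` embeds the health state in the
human-readable Status field, e.g. "Up 2 hours (healthy)".
"""
import json
import subprocess

containers = json.loads(subprocess.run(
    ["podman", "ps", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)
for c in containers:
    print(c["Names"][0], "->", c.get("Status", "unknown"))
```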
Nov 26 02:00:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:00:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.869 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.869 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.869 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.870 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
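The burst of "Registering pollster" lines is ceilometer's polling manager submitting every pollster from the [pollsters] source to a ThreadPoolExecutor with a single worker, which is why it warned above that there are more pollsters than worker threads: the tasks simply queue behind one another. The dispatch pattern, reduced to a self-contained sketch:

```python
#!/usr/bin/env python3
"""The pollster dispatch pattern visible in the log, reduced to a sketch.

With max_workers=1 (the "[1] threads" reported at 02:00:42.869) every
submitted pollster queues behind the previous one, so a long pollster
list stretches the polling cycle, exactly as the earlier warning says.
"""
from concurrent.futures import ThreadPoolExecutor


def poll(name: str) -> str:
    # Stand-in for a real pollster's sample-collection call.
    return f"polled {name}"


pollsters = ["disk.ephemeral.size", "network.incoming.packets",
             "network.outgoing.packets"]
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, p) for p in pollsters]
    for fut in futures:
        print(fut.result())
```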
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.880 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'name': 'vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.886 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'name': 'vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.892 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.897 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'name': 'vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
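The four "instance data" dicts come from discover_libvirt_polling, which enumerates the running libvirt domains on this host; in the libvirt_metadata discovery mode the flavor, image, and tenant fields are read from the Nova metadata stored alongside each domain rather than queried from the API. A minimal sketch of the enumeration step only, using the python3-libvirt bindings:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        # dom.info() -> [state, maxMem(KiB), memory(KiB), nrVirtCpu, cpuTime(ns)]
        if dom.info()[0] == libvirt.VIR_DOMAIN_RUNNING:
            print(dom.UUIDString(), dom.name())  # e.g. instance-00000003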
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.898 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.898 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.898 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.899 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:00:42.899066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.901 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
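The "Checking if we need coordination" / hashring pair repeats for every pollster: when a polling source names a coordination group, the agents in that group share a tooz hash ring and each agent polls only the resources the ring assigns to it; with a group name of None, as here, everything is polled locally. A rough sketch of the ring test, assuming tooz's HashRing API and made-up node names:

    from tooz import hashring

    ring = hashring.HashRing(['compute-0', 'compute-1'])
    resource_id = b'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2'
    # get_nodes() returns the set of agents responsible for this resource.
    poll_here = 'compute-0' in ring.get_nodes(resource_id, replicas=1)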
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.902 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.902 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.902 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.902 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:00:42.902490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.912 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.920 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.927 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.933 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.934 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
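The network.incoming.packets volumes (14, 12, 22, 55) are cumulative counters read from libvirt, one sample per vNIC. interfaceStats() returns the 8-tuple (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop); ceilometer resolves the tap device name from the domain XML, so the name below is an assumption:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2')
    stats = dom.interfaceStats('tap0')     # hypothetical device name
    rx_packets = stats[1]                  # -> network.incoming.packets
    rx_errs, rx_drop = stats[2], stats[3]  # -> .packets.error / .packets.drop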
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.934 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.934 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.935 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.935 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.935 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.937 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.937 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:00:42.935433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.938 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.938 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.938 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.938 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:00:42.938682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
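Each "Pollster heartbeat update" line (worker 15) is paired with an "Updated heartbeat" line from a different worker (12): the polling thread records which pollster just ran, and a separate status worker stamps it with the wall-clock time. A thread-safe sketch of that handoff (the structure is an assumption; ceilometer's own bookkeeping lives in the AgentManager):

    import datetime
    import threading

    _heartbeats = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        # Called after each pollster run; read by the status worker.
        with _lock:
            _heartbeats[pollster_name] = datetime.datetime.now(datetime.timezone.utc)

    heartbeat('network.incoming.packets.drop')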
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.939 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.939 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.940 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.940 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.941 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.941 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.942 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.942 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.942 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.942 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.943 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.943 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.944 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.944 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.946 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.946 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:00:42.942629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.947 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.947 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.947 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.947 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.947 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.948 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.949 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.949 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes volume: 7218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.950 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:00:42.947523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.951 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.951 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.951 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.952 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.952 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.953 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:00:42.952146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:42.989 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/cpu volume: 38410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.028 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/cpu volume: 36920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.064 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 44570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.103 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/cpu volume: 310020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.105 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
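The cpu volumes are cumulative guest CPU time in nanoseconds (e.g. 38410000000 ns is roughly 38.4 s of CPU time for instance a8b199f7), which libvirt exposes directly on the domain:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2')
    # info() -> [state, maxMem, memory, nrVirtCpu, cpuTime]; cpuTime is in ns.
    cpu_time_ns = dom.info()[4]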
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.105 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.106 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.106 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.107 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.107 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.108 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:00:43.107533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.109 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.110 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.110 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.111 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
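The *.delta meters subtract the previous cumulative reading from the current one using a per-resource cache, so the first cycle for a resource produces no sample. A minimal sketch with illustrative numbers (ceilometer keeps this state in the pollster cache rather than a module-level dict):

    _prev = {}

    def to_delta(resource_id, meter, current):
        key = (resource_id, meter)
        prev = _prev.get(key)
        _prev[key] = current
        if prev is None:
            return None                  # no baseline yet
        return max(current - prev, 0)    # guard against counter resets

    to_delta('a8b199f7', 'network.outgoing.bytes', 2286)         # first cycle
    print(to_delta('a8b199f7', 'network.outgoing.bytes', 2356))  # -> 70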
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.112 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.113 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.114 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.114 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:00:43.114695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.115 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.116 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/memory.usage volume: 49.0546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.117 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.117 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.118 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
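memory.usage is reported in MiB and derived from the guest balloon driver's statistics; the fractional values fall straight out of the KiB-to-MiB division (49.03515625 MiB is exactly 50212 KiB). A plausible reconstruction, assuming the usual preference for the 'usable' key with a fallback to 'unused':

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2')
    stats = dom.memoryStats()                    # all values in KiB
    total_kib = dom.info()[2]                    # current memory allocation
    free_kib = stats.get('usable', stats.get('unused', 0))
    usage_mib = (total_kib - free_kib) / 1024.0  # ~49.04 for these guests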
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.119 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.120 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.121 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.122 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:00:43.122285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.123 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.124 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.125 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.126 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.128 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.129 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:00:43.130247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.130 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.131 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.132 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.133 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.133 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.134 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.135 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.135 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.135 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets volume: 61 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:00:43.134633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.136 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.137 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.137 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.137 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.137 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.138 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.138 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:00:43.136770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.139 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.140 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.140 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.140 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.140 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.141 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.141 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.141 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:00:43.139214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:00:43.141528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.173 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.174 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.174 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.207 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.225 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.225 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.226 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.244 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.244 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.244 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
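disk.device.capacity is emitted once per attached block device, hence three samples per instance: two 1 GiB disks (1073741824 bytes, matching the m1.small flavor's root and ephemeral disks) plus one ~570 KiB device, most likely a config drive. blockInfo() returns (capacity, allocation, physical) in bytes; the device names below are assumptions:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2')
    for dev in ('vda', 'vdb', 'vdc'):          # hypothetical device list
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)                   # 1073741824 for the 1 GiB disks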
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.245 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:00:43.245960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.299 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.299 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.300 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.352 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.353 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.353 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.440 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.440 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.441 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.527 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.528 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.528 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
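The _stats_to_sample DEBUG lines above follow a fixed payload format: "<instance-uuid>/<meter> volume: <value>", one line per disk device (three devices per instance in this cycle). A minimal extraction sketch in Python, assuming nothing beyond the format visible in this log; SAMPLE_RE and parse_sample are illustrative names, not ceilometer code:

import re

# Matches the _stats_to_sample payload seen in these DEBUG lines:
#   <instance-uuid>/<meter> volume: <value>
SAMPLE_RE = re.compile(
    r'(?P<instance>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
    r'/(?P<meter>[\w.]+) volume: (?P<volume>\d+)'
)

def parse_sample(line):
    """Return (instance_uuid, meter, volume), or None for non-sample lines."""
    m = SAMPLE_RE.search(line)
    if m is None:
        return None
    return m.group('instance'), m.group('meter'), int(m.group('volume'))

print(parse_sample(
    'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 23308800'
))
# -> ('a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'disk.device.read.bytes', 23308800)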
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.530 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.530 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.530 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.531 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.531 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.532 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.532 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 1818076010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:00:43.532006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.533 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 286055535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.533 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 221080770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.533 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 2007436788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.534 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 283353651 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.534 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 197487344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.535 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.535 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.536 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.536 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 2021453674 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.537 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 321911498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.537 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.latency volume: 237452008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.538 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.539 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.539 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.539 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.539 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.539 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.540 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.540 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.540 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.541 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.541 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.542 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.542 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.543 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:00:43.539654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.544 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.545 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.545 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.546 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.547 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.547 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.548 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.548 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.548 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.548 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.548 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.549 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.549 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.550 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:00:43.548611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.551 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.551 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.552 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.552 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.553 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.553 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.554 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.554 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.555 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.555 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.557 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.557 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.557 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.558 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.558 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.558 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.559 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.560 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.560 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.561 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.561 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.562 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.562 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.563 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.563 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.563 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:00:43.558029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.566 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.566 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.566 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.566 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.567 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.567 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.568 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.568 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.569 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.569 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.570 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
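The power.state samples report volume: 1 for all four instances. This matches libvirt's virDomainState enum, where 1 is VIR_DOMAIN_RUNNING; treating ceilometer's power.state meter as a pass-through of that enum is an assumption here, but it is consistent with every value in this log. A small lookup sketch:

# libvirt virDomainState values (assumed to be what power.state carries;
# volume: 1 above would then mean "running" for each instance).
LIBVIRT_POWER_STATES = {
    0: 'nostate',
    1: 'running',
    2: 'blocked',
    3: 'paused',
    4: 'shutdown',
    5: 'shutoff',
    6: 'crashed',
    7: 'pmsuspended',
}

def power_state_name(volume):
    return LIBVIRT_POWER_STATES.get(int(volume), 'unknown')

print(power_state_name(1))  # -> running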
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.570 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:00:43.567475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.570 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.571 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.572 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.572 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:00:43.572158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.574 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 5109418941 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.574 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 30681884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.574 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.575 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 5738822785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.576 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 28688069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.577 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.577 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.578 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.578 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.578 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 8335163051 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.578 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 31365598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.579 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.579 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.579 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.580 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.581 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.581 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.581 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.582 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.582 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.582 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.582 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.583 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:00:43.580391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.583 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.583 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.584 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.584 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.584 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.584 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.585 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.585 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.585 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.585 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.585 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:00:43.585105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.586 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.586 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.587 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.587 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.587 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.587 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.588 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.588 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.588 15 DEBUG ceilometer.compute.pollsters [-] 0e500d52-72e1-4501-b4d6-fc6ca575760f/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.589 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.589 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.589 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.590 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.591 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.592 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.593 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.593 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.593 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:00:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:00:43.593 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
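Every meter in this cycle moves through the same manager sequence, readable from the file:line references in the messages: discovery via _internal_pollster_run (manager.py:294), a coordination check that no source here requires (manager.py:333 and :355), a heartbeat (manager.py:636) mirrored by a second worker's "Updated heartbeat" at manager.py:502, per-device _stats_to_sample calls (pollsters/__init__.py:108), and a closing "Finished processing" pass (manager.py:272). A condensed, runnable reconstruction of that flow; only the ordering and the cited line numbers come from the log, and every name below is illustrative rather than ceilometer's actual API:

# Reconstruction of the polling cycle traced above; not ceilometer source.
def run_cycle(pollsters):
    for p in pollsters:
        resources = p['discover']()                               # manager.py:294
        if not resources:
            print(f"Skip pollster {p['name']}, no new resources") # manager.py:321
            continue
        # Coordination check: each pollster here logs that it is not
        # configured in a source requiring coordination.          # manager.py:333/:355
        print(f"heartbeat: {p['name']}")                          # manager.py:636
        for r in resources:
            for volume in p['stats'](r):                          # pollsters/__init__.py:108
                print(f"{r}/{p['name']} volume: {volume}")
        print(f"Finished polling pollster {p['name']}")
    for p in pollsters:
        print(f"Finished processing pollster [{p['name']}]")      # manager.py:272

run_cycle([{
    'name': 'disk.device.read.requests',
    'discover': lambda: ['a8b199f7-8cd5-45ea-bc7e-af8352a6afa2'],
    'stats': lambda r: [840, 173, 124],                           # one value per disk device
}])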
Nov 26 02:00:43 compute-0 nova_compute[350387]: 2025-11-26 02:00:43.674 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:00:44 compute-0 podman[427901]: 2025-11-26 02:00:44.836950188 +0000 UTC m=+0.120831226 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:00:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:44 compute-0 podman[427902]: 2025-11-26 02:00:44.907976079 +0000 UTC m=+0.176612347 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 26 02:00:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:46 compute-0 nova_compute[350387]: 2025-11-26 02:00:46.579 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:47 compute-0 podman[427944]: 2025-11-26 02:00:47.56139629 +0000 UTC m=+0.109942768 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=)
Nov 26 02:00:48 compute-0 nova_compute[350387]: 2025-11-26 02:00:48.679 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022107945480888194 of space, bias 1.0, pg target 0.6632383644266459 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:00:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:00:51 compute-0 nova_compute[350387]: 2025-11-26 02:00:51.583 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:52 compute-0 podman[427964]: 2025-11-26 02:00:52.542267526 +0000 UTC m=+0.107517150 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 02:00:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:53 compute-0 nova_compute[350387]: 2025-11-26 02:00:53.684 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:54 compute-0 podman[427983]: 2025-11-26 02:00:54.593970743 +0000 UTC m=+0.143986098 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 02:00:54 compute-0 podman[427984]: 2025-11-26 02:00:54.5977775 +0000 UTC m=+0.138229646 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:00:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:00:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:56 compute-0 nova_compute[350387]: 2025-11-26 02:00:56.586 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:58 compute-0 nova_compute[350387]: 2025-11-26 02:00:58.689 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:00:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:00:59 compute-0 podman[158021]: time="2025-11-26T02:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:00:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:00:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Nov 26 02:00:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:01 compute-0 openstack_network_exporter[367323]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:01:01 compute-0 openstack_network_exporter[367323]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:01:01 compute-0 openstack_network_exporter[367323]: ERROR   02:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:01:01 compute-0 openstack_network_exporter[367323]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:01:01 compute-0 openstack_network_exporter[367323]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:01:01 compute-0 nova_compute[350387]: 2025-11-26 02:01:01.589 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.175 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.176 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.177 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.178 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.178 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.181 350391 INFO nova.compute.manager [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Terminating instance#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.183 350391 DEBUG nova.compute.manager [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
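[editor's note] The Acquiring/acquired/released triplets above are oslo.concurrency named locks: nova serializes lifecycle operations on an instance by locking on the instance UUID, with a nested '<uuid>-events' lock guarding the pending-event queue. A minimal sketch of the pattern, with the UUID copied from the log and an illustrative body (this is the locking shape, not nova's actual code):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "0e500d52-72e1-4501-b4d6-fc6ca575760f"  # from the log

    def do_terminate_instance():
        # holders of the same-named lock queue up, so only one lifecycle
        # operation runs against this instance at a time
        with lockutils.lock(INSTANCE_UUID):
            # the nested "<uuid>-events" lock guards the pending-event queue,
            # cleared before teardown as in the log above
            with lockutils.lock(INSTANCE_UUID + "-events"):
                pass  # clear_events_for_instance()
            print("Terminating instance")

    do_terminate_instance()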
Nov 26 02:01:02 compute-0 kernel: tapcc7c212d-f2 (unregistering): left promiscuous mode
Nov 26 02:01:02 compute-0 NetworkManager[48886]: <info>  [1764122462.3906] device (tapcc7c212d-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.399 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 ovn_controller[89102]: 2025-11-26T02:01:02Z|00050|binding|INFO|Releasing lport cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 from this chassis (sb_readonly=0)
Nov 26 02:01:02 compute-0 ovn_controller[89102]: 2025-11-26T02:01:02Z|00051|binding|INFO|Setting lport cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 down in Southbound
Nov 26 02:01:02 compute-0 ovn_controller[89102]: 2025-11-26T02:01:02Z|00052|binding|INFO|Removing iface tapcc7c212d-f2 ovn-installed in OVS
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.405 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.413 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:20:57 192.168.0.118'], port_security=['fa:16:3e:70:20:57 192.168.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-rkxsz3cjssco-tkhgbferrqyy-port-fjd2vmeyty65', 'neutron:cidrs': '192.168.0.118/24', 'neutron:device_id': '0e500d52-72e1-4501-b4d6-fc6ca575760f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-rkxsz3cjssco-tkhgbferrqyy-port-fjd2vmeyty65', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.183', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.417 286844 INFO neutron.agent.ovn.metadata.agent [-] Port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e unbound from our chassis#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.423 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.434 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.448 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[9d90924b-34e6-4857-94ce-14cd11836155]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:01:02 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 26 02:01:02 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 33.020s CPU time.
Nov 26 02:01:02 compute-0 systemd-machined[138512]: Machine qemu-2-instance-00000002 terminated.
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.492 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[2930e253-934a-4138-b625-d4b41cea900b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.495 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbd0b5c-ca0f-48f6-a10e-15d3f8029cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.537 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[53de8d47-3763-4b69-accf-f87c6e52c10a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.571 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6282f700-cc17-47cb-a382-1b25a020c9c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 12, 'rx_bytes': 532, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 22545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428048, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.596 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[9e67056a-8b34-4134-a2c7-fdb353b667af]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428049, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428049, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
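[editor's note] The privsep replies above are netlink dumps (RTM_NEWLINK/RTM_NEWADDR) taken inside the ovnmeta-<network-id> namespace: the tapc97f5f89-71 veth is up and carries 192.168.0.2/24 plus the 169.254.169.254/32 metadata address. A minimal sketch of the same query via pyroute2, assuming the namespace named in the 'target' field of the replies still exists on the host:

    from pyroute2 import NetNS

    # namespace name copied from the 'target' field in the replies above
    with NetNS("ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e") as ns:
        for link in ns.get_links():          # RTM_NEWLINK dump
            print(link.get_attr("IFLA_IFNAME"), link["state"])
        for addr in ns.get_addr():           # RTM_NEWADDR dump
            print(addr.get_attr("IFA_ADDRESS"), addr["prefixlen"])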
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.598 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.600 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.610 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.610 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.611 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.611 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.612 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
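[editor's note] DelPortCommand, AddPortCommand and DbSetCommand above are ovsdbapp's Open_vSwitch-schema commands: the metadata agent ensures tapc97f5f89-70 sits on br-int (not br-ex) and stamps its Interface row with the expected iface-id, and both later steps report "Transaction caused no change" because the rows were already in that state. A minimal sketch of the same transaction; the ovsdb socket path is the conventional local one and is assumed, not taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local ovsdb-server endpoint (not shown in the log)
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # same three commands as the logged transaction, values from the log
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tapc97f5f89-70", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tapc97f5f89-70", may_exist=True))
        txn.add(api.db_set("Interface", "tapc97f5f89-70",
                           ("external_ids",
                            {"iface-id": "3824ec63-7278-42dc-8c72-8ec8e06c2f0b"})))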
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.620 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.631 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.641 350391 INFO nova.virt.libvirt.driver [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Instance destroyed successfully.#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.641 350391 DEBUG nova.objects.instance [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'resources' on Instance uuid 0e500d52-72e1-4501-b4d6-fc6ca575760f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.665 350391 DEBUG nova.virt.libvirt.vif [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T01:51:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-rkxsz3cjssco-tkhgbferrqyy-vnf-25kkokddjcoo',id=2,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T01:51:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-fn9f8qdl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T01:51:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 02:01:02 compute-0 nova_compute[350387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjQxMjYwMDA4NTM3MDk4NzExND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI0MTI2MDAwODUzNzA5ODcxMTQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yNDEyNjAwMDg1MzcwOTg3MTE0PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=0e500d52-72e1-4501-b4d6-fc6ca575760f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.666 350391 DEBUG nova.network.os_vif_util [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.667 350391 DEBUG nova.network.os_vif_util [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.668 350391 DEBUG os_vif [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.671 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.671 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc7c212d-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.675 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.680 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.685 350391 INFO os_vif [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:20:57,bridge_name='br-int',has_traffic_filtering=True,id=cc7c212d-f288-48f9-a0c6-0e5635e3f2b7,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcc7c212d-f2')#033[00m
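[editor's note] The Converting/Converted/Unplugging sequence above is nova handing the neutron VIF dict to os-vif, which selects the 'ovs' plugin and removes the tap port from br-int. A minimal sketch of driving os-vif directly with values copied from the log; the field set is trimmed to the essentials and the call is illustrative, not nova's actual code path:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the ovs/linux_bridge/... plugins

    # values copied from the VIFOpenVSwitch repr in the log; other fields
    # (network, port_profile, ...) are omitted for brevity
    vif = vif_obj.VIFOpenVSwitch(
        id="cc7c212d-f288-48f9-a0c6-0e5635e3f2b7",
        address="fa:16:3e:70:20:57",
        bridge_name="br-int",
        vif_name="tapcc7c212d-f2",
    )
    inst = instance_info.InstanceInfo(
        uuid="0e500d52-72e1-4501-b4d6-fc6ca575760f",
        name="instance-00000002",
    )
    os_vif.unplug(vif, inst)  # removes tapcc7c212d-f2 from br-int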
Nov 26 02:01:02 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 02:01:02.665 350391 DEBUG nova.virt.libvirt.vif [None req-c50a974b-e9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.731 350391 DEBUG nova.compute.manager [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-vif-unplugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.732 350391 DEBUG oslo_concurrency.lockutils [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.733 350391 DEBUG oslo_concurrency.lockutils [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.733 350391 DEBUG oslo_concurrency.lockutils [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.734 350391 DEBUG nova.compute.manager [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] No waiting events found dispatching network-vif-unplugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.734 350391 DEBUG nova.compute.manager [req-b90d519e-65f1-4299-974b-05c9e01ef7dd req-b503d252-8755-4d39-a4f7-6f0b794e59d8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-vif-unplugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.749 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:01:02 compute-0 nova_compute[350387]: 2025-11-26 02:01:02.750 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:01:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:02.751 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:01:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:03 compute-0 nova_compute[350387]: 2025-11-26 02:01:03.299 350391 DEBUG nova.compute.manager [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-changed-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:01:03 compute-0 nova_compute[350387]: 2025-11-26 02:01:03.300 350391 DEBUG nova.compute.manager [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Refreshing instance network info cache due to event network-changed-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:01:03 compute-0 nova_compute[350387]: 2025-11-26 02:01:03.300 350391 DEBUG oslo_concurrency.lockutils [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:01:03 compute-0 nova_compute[350387]: 2025-11-26 02:01:03.301 350391 DEBUG oslo_concurrency.lockutils [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:01:03 compute-0 nova_compute[350387]: 2025-11-26 02:01:03.301 350391 DEBUG nova.network.neutron [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Refreshing network info cache for port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.001 350391 INFO nova.virt.libvirt.driver [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Deleting instance files /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f_del
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.002 350391 INFO nova.virt.libvirt.driver [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Deletion of /var/lib/nova/instances/0e500d52-72e1-4501-b4d6-fc6ca575760f_del complete
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.113 350391 DEBUG nova.virt.libvirt.host [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.113 350391 INFO nova.virt.libvirt.host [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] UEFI support detected
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.116 350391 INFO nova.compute.manager [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Took 1.93 seconds to destroy the instance on the hypervisor.
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.117 350391 DEBUG oslo.service.loopingcall [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.118 350391 DEBUG nova.compute.manager [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.118 350391 DEBUG nova.network.neutron [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.866 350391 DEBUG nova.compute.manager [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.866 350391 DEBUG oslo_concurrency.lockutils [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.867 350391 DEBUG oslo_concurrency.lockutils [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.867 350391 DEBUG oslo_concurrency.lockutils [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.867 350391 DEBUG nova.compute.manager [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] No waiting events found dispatching network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:01:04 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.868 350391 WARNING nova.compute.manager [req-5e754621-83d0-4d25-876c-b6f7e62d476e req-fa7f36de-cc75-4e41-b9cf-29e0ef2c6321 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Received unexpected event network-vif-plugged-cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 for instance with vm_state active and task_state deleting.
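[editor example] The "Acquiring lock / acquired / released" triplets above (lockutils.py:404/409/423) are emitted by oslo.concurrency's locking helper around the event pop. A minimal sketch of that pattern, not Nova's actual code, reusing the lock name from the log:

    from oslo_concurrency import lockutils

    # Entering and leaving this block produces the acquired/released
    # DEBUG pairs seen above, including the waited/held timings.
    with lockutils.lock("0e500d52-72e1-4501-b4d6-fc6ca575760f-events"):
        pass  # pop the waiting network-vif event for the instance, if any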
Nov 26 02:01:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.998 350391 DEBUG nova.network.neutron [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updated VIF entry in instance network info cache for port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:04.999 350391 DEBUG nova.network.neutron [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [{"id": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "address": "fa:16:3e:70:20:57", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc7c212d-f2", "ovs_interfaceid": "cc7c212d-f288-48f9-a0c6-0e5635e3f2b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.034 350391 DEBUG oslo_concurrency.lockutils [req-91ce1234-f570-4889-b743-cf6a00ffaa48 req-2cdf7cd6-c238-4f80-a38c-69585a461e74 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-0e500d52-72e1-4501-b4d6-fc6ca575760f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
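[editor example] The network_info blob cached above is plain JSON. A hedged sketch of pulling the fixed IPs back out of it; raw_network_info is a hypothetical variable holding the logged list as a string:

    import json

    def fixed_ips(raw_network_info):
        """Return fixed IP addresses from a logged network_info JSON list."""
        port = json.loads(raw_network_info)[0]
        return [ip["address"]
                for subnet in port["network"]["subnets"]
                for ip in subnet["ips"]]

    # For port cc7c212d-f288-48f9-a0c6-0e5635e3f2b7 above this
    # yields ["192.168.0.118"].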
Nov 26 02:01:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 10 op/s
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.585 350391 DEBUG nova.network.neutron [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.598 350391 INFO nova.compute.manager [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Took 1.48 seconds to deallocate network for instance.
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.638 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.638 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:01:05 compute-0 nova_compute[350387]: 2025-11-26 02:01:05.757 350391 DEBUG oslo_concurrency.processutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:01:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:01:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563416692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.279 350391 DEBUG oslo_concurrency.processutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
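[editor example] The resource tracker shells out to the exact ceph command shown above to size its RBD-backed disk inventory. A minimal standalone sketch of the same probe; the "stats"/"total_avail_bytes" field names are assumed from the standard `ceph df --format=json` output rather than taken from this log:

    import json
    import subprocess

    # Same command line the DEBUG entries above record.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]          # assumed schema
    print(stats["total_avail_bytes"] / 1024 ** 3, "GiB available")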
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.287 350391 DEBUG nova.compute.provider_tree [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.301 350391 DEBUG nova.scheduler.client.report [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
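[editor example] Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. A worked restatement of the numbers in the line above:

    # Effective capacity implied by the logged inventory:
    vcpu_capacity = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    ram_capacity  = (7679 - 512) * 1.0   # 7167.0 MB
    disk_capacity = (59 - 1) * 0.9       # 52.2 GB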
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.317 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.365 350391 INFO nova.scheduler.client.report [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Deleted allocations for instance 0e500d52-72e1-4501-b4d6-fc6ca575760f
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.462 350391 DEBUG oslo_concurrency.lockutils [None req-c50a974b-e98e-4e80-ad7e-02f770a3c8e7 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0e500d52-72e1-4501-b4d6-fc6ca575760f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:06 compute-0 nova_compute[350387]: 2025-11-26 02:01:06.593 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 228 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 KiB/s wr, 19 op/s
Nov 26 02:01:07 compute-0 nova_compute[350387]: 2025-11-26 02:01:07.674 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:01:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:11 compute-0 nova_compute[350387]: 2025-11-26 02:01:11.596 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:11.753 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
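[editor example] That DbSetCommand is what ovsdbapp builds from a db_set call; this closes out the "Delaying updating chassis table for 9 seconds" entry earlier, acknowledging nb_cfg=7 from the SB_Global update. A hedged sketch of the equivalent call; the connected sb_idl handle and the check_error flag are assumptions, while the table, record UUID, and column value come from the log:

    def ack_metadata_sb_cfg(sb_idl, chassis_uuid, nb_cfg):
        """Record the last-seen SB_Global nb_cfg on the chassis row."""
        sb_idl.db_set(
            'Chassis_Private', chassis_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        ).execute(check_error=True)

    # e.g. ack_metadata_sb_cfg(sb_idl,
    #          '27d03014-5e51-4d89-b5a1-b13242894075', 7)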
Nov 26 02:01:12 compute-0 podman[428104]: 2025-11-26 02:01:12.579154938 +0000 UTC m=+0.112366207 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:01:12 compute-0 podman[428102]: 2025-11-26 02:01:12.603256267 +0000 UTC m=+0.143915206 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 02:01:12 compute-0 podman[428105]: 2025-11-26 02:01:12.615345308 +0000 UTC m=+0.145182872 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:01:12 compute-0 nova_compute[350387]: 2025-11-26 02:01:12.676 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:01:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:01:15 compute-0 podman[428163]: 2025-11-26 02:01:15.592127814 +0000 UTC m=+0.139758749 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:01:15 compute-0 podman[428164]: 2025-11-26 02:01:15.695426994 +0000 UTC m=+0.237198514 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:01:16 compute-0 nova_compute[350387]: 2025-11-26 02:01:16.599 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Nov 26 02:01:17 compute-0 nova_compute[350387]: 2025-11-26 02:01:17.636 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764122462.634882, 0e500d52-72e1-4501-b4d6-fc6ca575760f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:01:17 compute-0 nova_compute[350387]: 2025-11-26 02:01:17.637 350391 INFO nova.compute.manager [-] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] VM Stopped (Lifecycle Event)
Nov 26 02:01:17 compute-0 nova_compute[350387]: 2025-11-26 02:01:17.666 350391 DEBUG nova.compute.manager [None req-338c3da6-1ea5-41eb-a529-88d486f2fff2 - - - - - -] [instance: 0e500d52-72e1-4501-b4d6-fc6ca575760f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:01:17 compute-0 nova_compute[350387]: 2025-11-26 02:01:17.679 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:18 compute-0 podman[428207]: 2025-11-26 02:01:18.601530636 +0000 UTC m=+0.142058763 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 02:01:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Nov 26 02:01:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:21 compute-0 nova_compute[350387]: 2025-11-26 02:01:21.603 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:22 compute-0 nova_compute[350387]: 2025-11-26 02:01:22.682 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:23 compute-0 podman[428225]: 2025-11-26 02:01:23.594413336 +0000 UTC m=+0.136640251 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 26 02:01:24 compute-0 nova_compute[350387]: 2025-11-26 02:01:24.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:24.980 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:01:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:24.981 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:01:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:01:24.981 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.444 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.445 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.446 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.447 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.447 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:01:25 compute-0 podman[428244]: 2025-11-26 02:01:25.599269883 +0000 UTC m=+0.140792798 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Nov 26 02:01:25 compute-0 podman[428245]: 2025-11-26 02:01:25.631642735 +0000 UTC m=+0.163180519 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:01:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:01:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704360537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:01:25 compute-0 nova_compute[350387]: 2025-11-26 02:01:25.949 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.093 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.093 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.094 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.100 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.101 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.101 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.109 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.109 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.110 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.608 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.835 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.836 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3409MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.837 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:01:26 compute-0 nova_compute[350387]: 2025-11-26 02:01:26.837 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:01:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:01:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/838251598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:01:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:01:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/838251598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.051 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.051 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.052 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.053 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.053 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
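[editor example] The "Final resource view" figures re-derive from the three per-instance placement allocations logged just above plus the 512 MB host reservation from the earlier inventory. A quick arithmetic check; treating used_ram as reservation plus instances is an assumption about how the tracker counts:

    instances = 3                       # b1c088bc..., a8b199f7..., d32050dc...
    used_ram  = 512 + instances * 512   # host reservation + MEMORY_MB each -> 2048 MB
    used_disk = instances * 2           # DISK_GB allocation each          -> 6 GB
    used_vcpus = instances * 1          # VCPU allocation each             -> 3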
Nov 26 02:01:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.120 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:01:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:01:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/436852843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.685 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.692 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.706 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.746 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.749 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:01:27 compute-0 nova_compute[350387]: 2025-11-26 02:01:27.750 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:01:28 compute-0 nova_compute[350387]: 2025-11-26 02:01:28.750 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:28 compute-0 nova_compute[350387]: 2025-11-26 02:01:28.751 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:01:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:29 compute-0 nova_compute[350387]: 2025-11-26 02:01:29.368 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:01:29 compute-0 nova_compute[350387]: 2025-11-26 02:01:29.369 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:01:29 compute-0 nova_compute[350387]: 2025-11-26 02:01:29.369 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:01:29 compute-0 podman[158021]: time="2025-11-26T02:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:01:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:01:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Nov 26 02:01:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.109 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.140 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.141 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.142 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.143 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.143 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.144 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:31 compute-0 openstack_network_exporter[367323]: ERROR   02:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:01:31 compute-0 openstack_network_exporter[367323]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:01:31 compute-0 openstack_network_exporter[367323]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:01:31 compute-0 openstack_network_exporter[367323]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:01:31 compute-0 openstack_network_exporter[367323]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:01:31 compute-0 nova_compute[350387]: 2025-11-26 02:01:31.612 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:32 compute-0 nova_compute[350387]: 2025-11-26 02:01:32.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:01:32 compute-0 nova_compute[350387]: 2025-11-26 02:01:32.690 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:36 compute-0 nova_compute[350387]: 2025-11-26 02:01:36.614 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:37 compute-0 nova_compute[350387]: 2025-11-26 02:01:37.694 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:38 compute-0 ovn_controller[89102]: 2025-11-26T02:01:38Z|00053|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 26 02:01:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:01:41
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images']
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:01:41 compute-0 nova_compute[350387]: 2025-11-26 02:01:41.618 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:01:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:01:42 compute-0 nova_compute[350387]: 2025-11-26 02:01:42.697 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:43 compute-0 podman[428473]: 2025-11-26 02:01:43.105245932 +0000 UTC m=+0.109796785 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:01:43 compute-0 podman[428471]: 2025-11-26 02:01:43.106009213 +0000 UTC m=+0.122423630 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 26 02:01:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:43 compute-0 podman[428472]: 2025-11-26 02:01:43.123214158 +0000 UTC m=+0.136345753 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 02:01:43 compute-0 podman[428553]: 2025-11-26 02:01:43.279887072 +0000 UTC m=+0.106002497 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:01:43 compute-0 podman[428553]: 2025-11-26 02:01:43.398677168 +0000 UTC m=+0.224792613 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:01:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:01:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:01:44 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:45 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:45 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c6f3703-4cea-4f67-ac1a-e0857d6826e5 does not exist
Nov 26 02:01:45 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3545001a-dea8-434a-836d-82c6764e6710 does not exist
Nov 26 02:01:45 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bd1588cf-d359-407b-89f4-606871ccb00e does not exist
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:01:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:01:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:01:46 compute-0 podman[428857]: 2025-11-26 02:01:46.239572332 +0000 UTC m=+0.143271828 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:01:46 compute-0 podman[428858]: 2025-11-26 02:01:46.318096555 +0000 UTC m=+0.219601599 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:01:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:01:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:46 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:01:46 compute-0 nova_compute[350387]: 2025-11-26 02:01:46.621 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.096598179 +0000 UTC m=+0.092514628 container create 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:01:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.060287006 +0000 UTC m=+0.056203475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:47 compute-0 systemd[1]: Started libpod-conmon-75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01.scope.
Nov 26 02:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.245549916 +0000 UTC m=+0.241466405 container init 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.264228043 +0000 UTC m=+0.260144492 container start 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.270748636 +0000 UTC m=+0.266665145 container attach 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:47 compute-0 focused_tu[429031]: 167 167
Nov 26 02:01:47 compute-0 systemd[1]: libpod-75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01.scope: Deactivated successfully.
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.278023181 +0000 UTC m=+0.273939620 container died 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e348f4a5ab02a6f2ca24c1c62225340d20ea84ffae4302a54149958a35b96d3d-merged.mount: Deactivated successfully.
Nov 26 02:01:47 compute-0 podman[429015]: 2025-11-26 02:01:47.367474612 +0000 UTC m=+0.363391061 container remove 75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:01:47 compute-0 systemd[1]: libpod-conmon-75e3ca62ed5f90145370b96c94a51a16b3c1ec41d7d2084282ad9076280c7b01.scope: Deactivated successfully.
Nov 26 02:01:47 compute-0 podman[429055]: 2025-11-26 02:01:47.605990372 +0000 UTC m=+0.080546610 container create 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:01:47 compute-0 podman[429055]: 2025-11-26 02:01:47.568033653 +0000 UTC m=+0.042589961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:47 compute-0 systemd[1]: Started libpod-conmon-0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d.scope.
Nov 26 02:01:47 compute-0 nova_compute[350387]: 2025-11-26 02:01:47.699 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:47 compute-0 podman[429055]: 2025-11-26 02:01:47.798449985 +0000 UTC m=+0.273006233 container init 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:01:47 compute-0 podman[429055]: 2025-11-26 02:01:47.832381761 +0000 UTC m=+0.306938019 container start 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:01:47 compute-0 podman[429055]: 2025-11-26 02:01:47.839524532 +0000 UTC m=+0.314080830 container attach 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:49 compute-0 sleepy_darwin[429070]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:01:49 compute-0 sleepy_darwin[429070]: --> relative data size: 1.0
Nov 26 02:01:49 compute-0 sleepy_darwin[429070]: --> All data devices are unavailable
Nov 26 02:01:49 compute-0 systemd[1]: libpod-0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d.scope: Deactivated successfully.
Nov 26 02:01:49 compute-0 systemd[1]: libpod-0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d.scope: Consumed 1.341s CPU time.
Nov 26 02:01:49 compute-0 podman[429100]: 2025-11-26 02:01:49.387727363 +0000 UTC m=+0.081530018 container died 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-560b345d54a75597cacbd0f6def24e8882f0c6a725748535af781a0901079352-merged.mount: Deactivated successfully.
Nov 26 02:01:49 compute-0 podman[429100]: 2025-11-26 02:01:49.475055943 +0000 UTC m=+0.168858588 container remove 0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_darwin, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:49 compute-0 podman[429099]: 2025-11-26 02:01:49.478552702 +0000 UTC m=+0.158536998 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.expose-services=)
Nov 26 02:01:49 compute-0 systemd[1]: libpod-conmon-0c3940cb3a5ac90d96cab8c51fb67994284beebaa6e16d52c227d284482fd46d.scope: Deactivated successfully.
Nov 26 02:01:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.698145193 +0000 UTC m=+0.089663107 container create b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.657483457 +0000 UTC m=+0.049001411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:50 compute-0 systemd[1]: Started libpod-conmon-b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314.scope.
Nov 26 02:01:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.841729559 +0000 UTC m=+0.233247443 container init b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.858913493 +0000 UTC m=+0.250431357 container start b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.864326385 +0000 UTC m=+0.255844289 container attach b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:01:50 compute-0 laughing_beaver[429289]: 167 167
Nov 26 02:01:50 compute-0 systemd[1]: libpod-b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314.scope: Deactivated successfully.
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.875418628 +0000 UTC m=+0.266936532 container died b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 26 02:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a09818a8f8c38dd537475305b710baf0a7290f5015adbbb7c15ead2de59e98b-merged.mount: Deactivated successfully.
Nov 26 02:01:50 compute-0 podman[429273]: 2025-11-26 02:01:50.974455678 +0000 UTC m=+0.365973552 container remove b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 02:01:50 compute-0 systemd[1]: libpod-conmon-b44d6626aa46fed0f5b86b5ddda5f9fd4fc65e65f3fcac7000d9e835a5f7d314.scope: Deactivated successfully.
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:01:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
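The pg_autoscaler lines above contain enough numbers to reconstruct the arithmetic: each reported pg target equals the pool's share of raw space times its bias times a factor of 300, which is consistent with Ceph's default mon_target_pg_per_osd = 100 across the three OSDs this host activates further down. A minimal sketch, with the OSD count and per-OSD target stated as assumptions:

    # Reproduces the pg_autoscaler targets visible in the log above.
    # Assumptions (not stated in these lines): 3 OSDs, and Ceph's default
    # mon_target_pg_per_osd = 100, which together give the implied 300x factor.
    OSD_COUNT = 3
    TARGET_PG_PER_OSD = 100

    def pg_target(space_ratio: float, bias: float) -> float:
        return space_ratio * bias * OSD_COUNT * TARGET_PG_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... -> quantized to 1  ('.mgr')
    print(pg_target(0.0016571738458032168, 1.0))  # 0.4971521... -> quantized to 32 ('vms')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... -> quantized to 16 ('cephfs.cephfs.meta')

The quantized value is then compared against each pool's current pg_num; note that 'cephfs.cephfs.meta' quantizes to 16 while staying at 32, consistent with the autoscaler only resizing a pool once target and current diverge by its threshold factor (3.0 by default).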
Nov 26 02:01:51 compute-0 podman[429312]: 2025-11-26 02:01:51.256913036 +0000 UTC m=+0.084905033 container create b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:01:51 compute-0 podman[429312]: 2025-11-26 02:01:51.226908961 +0000 UTC m=+0.054900958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:51 compute-0 systemd[1]: Started libpod-conmon-b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391.scope.
Nov 26 02:01:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c644745197c53f63dcb73f37e9a117251f57a79e0a0ae55004b3f1a66cfb8163/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c644745197c53f63dcb73f37e9a117251f57a79e0a0ae55004b3f1a66cfb8163/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c644745197c53f63dcb73f37e9a117251f57a79e0a0ae55004b3f1a66cfb8163/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c644745197c53f63dcb73f37e9a117251f57a79e0a0ae55004b3f1a66cfb8163/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:51 compute-0 podman[429312]: 2025-11-26 02:01:51.439079599 +0000 UTC m=+0.267071646 container init b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 02:01:51 compute-0 podman[429312]: 2025-11-26 02:01:51.456099659 +0000 UTC m=+0.284091646 container start b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:01:51 compute-0 podman[429312]: 2025-11-26 02:01:51.463208649 +0000 UTC m=+0.291200656 container attach b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:01:51 compute-0 nova_compute[350387]: 2025-11-26 02:01:51.624 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]: {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    "0": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "devices": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "/dev/loop3"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            ],
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_name": "ceph_lv0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_size": "21470642176",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "name": "ceph_lv0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "tags": {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_name": "ceph",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.crush_device_class": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.encrypted": "0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_id": "0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.vdo": "0"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            },
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "vg_name": "ceph_vg0"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        }
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    ],
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    "1": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "devices": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "/dev/loop4"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            ],
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_name": "ceph_lv1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_size": "21470642176",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "name": "ceph_lv1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "tags": {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_name": "ceph",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.crush_device_class": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.encrypted": "0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_id": "1",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.vdo": "0"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            },
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "vg_name": "ceph_vg1"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        }
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    ],
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    "2": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "devices": [
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "/dev/loop5"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            ],
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_name": "ceph_lv2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_size": "21470642176",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "name": "ceph_lv2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "tags": {
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.cluster_name": "ceph",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.crush_device_class": "",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.encrypted": "0",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osd_id": "2",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:                "ceph.vdo": "0"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            },
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "type": "block",
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:            "vg_name": "ceph_vg2"
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:        }
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]:    ]
Nov 26 02:01:52 compute-0 peaceful_cerf[429327]: }
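The JSON block that `peaceful_cerf` prints has the shape of `ceph-volume lvm list --format json` output: a map keyed by OSD id, each entry describing the backing LV and its ceph.* tags. cephadm runs one-shot ceph containers like this to refresh its per-host device inventory (the `config-key set ... host.compute-0.devices.0` command a few seconds later stores the result). A small sketch for pulling the OSD-to-device mapping out of such a payload, assuming it has been saved to a hypothetical report.json:

    import json

    # Parse a ceph-volume style listing (as logged above) into an
    # osd_id -> (lv_path, backing devices) view.
    with open("report.json") as f:   # hypothetical copy of the JSON above
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3   (and likewise for OSDs 1 and 2)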
Nov 26 02:01:52 compute-0 systemd[1]: libpod-b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391.scope: Deactivated successfully.
Nov 26 02:01:52 compute-0 podman[429312]: 2025-11-26 02:01:52.338791359 +0000 UTC m=+1.166783346 container died b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c644745197c53f63dcb73f37e9a117251f57a79e0a0ae55004b3f1a66cfb8163-merged.mount: Deactivated successfully.
Nov 26 02:01:52 compute-0 podman[429312]: 2025-11-26 02:01:52.441296837 +0000 UTC m=+1.269288834 container remove b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:52 compute-0 systemd[1]: libpod-conmon-b0afceffd727345ba01992bc482dacf0f7d13384273591aedbfe0de4e469a391.scope: Deactivated successfully.
Nov 26 02:01:52 compute-0 nova_compute[350387]: 2025-11-26 02:01:52.703 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.755586597 +0000 UTC m=+0.088342570 container create 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.730736587 +0000 UTC m=+0.063492560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:53 compute-0 systemd[1]: Started libpod-conmon-582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37.scope.
Nov 26 02:01:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.922224312 +0000 UTC m=+0.254980335 container init 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.940767824 +0000 UTC m=+0.273523787 container start 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.948687127 +0000 UTC m=+0.281443150 container attach 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:53 compute-0 competent_tesla[429507]: 167 167
Nov 26 02:01:53 compute-0 systemd[1]: libpod-582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37.scope: Deactivated successfully.
Nov 26 02:01:53 compute-0 podman[429485]: 2025-11-26 02:01:53.954770229 +0000 UTC m=+0.287526202 container died 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e4641dd5e9947bb8b60d54f673022486738d6fc37fe20f5a93a996257bbc0b-merged.mount: Deactivated successfully.
Nov 26 02:01:54 compute-0 podman[429485]: 2025-11-26 02:01:54.032053676 +0000 UTC m=+0.364809619 container remove 582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:01:54 compute-0 podman[429499]: 2025-11-26 02:01:54.047893353 +0000 UTC m=+0.210576124 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:01:54 compute-0 systemd[1]: libpod-conmon-582d95be1d415f056caf1735e23207f11e4b0f2265065c2fd225e2fb72a1cb37.scope: Deactivated successfully.
Nov 26 02:01:54 compute-0 podman[429545]: 2025-11-26 02:01:54.340488656 +0000 UTC m=+0.088285359 container create 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:01:54 compute-0 podman[429545]: 2025-11-26 02:01:54.308119324 +0000 UTC m=+0.055916107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:01:54 compute-0 systemd[1]: Started libpod-conmon-4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b.scope.
Nov 26 02:01:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64175b7e52db33ea56f5cbc4b29d4ae190e375534a7c1fe749ac4f8f212c9f09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64175b7e52db33ea56f5cbc4b29d4ae190e375534a7c1fe749ac4f8f212c9f09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64175b7e52db33ea56f5cbc4b29d4ae190e375534a7c1fe749ac4f8f212c9f09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64175b7e52db33ea56f5cbc4b29d4ae190e375534a7c1fe749ac4f8f212c9f09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:01:54 compute-0 podman[429545]: 2025-11-26 02:01:54.526762634 +0000 UTC m=+0.274559417 container init 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:01:54 compute-0 podman[429545]: 2025-11-26 02:01:54.552099168 +0000 UTC m=+0.299895891 container start 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:01:54 compute-0 podman[429545]: 2025-11-26 02:01:54.558699284 +0000 UTC m=+0.306496007 container attach 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:01:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:01:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:55 compute-0 pensive_poincare[429561]: {
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_id": 0,
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "type": "bluestore"
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    },
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_id": 2,
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "type": "bluestore"
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    },
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_id": 1,
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:        "type": "bluestore"
Nov 26 02:01:55 compute-0 pensive_poincare[429561]:    }
Nov 26 02:01:55 compute-0 pensive_poincare[429561]: }
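The second one-shot container, `pensive_poincare`, emits a map keyed by OSD fsid with ceph_fsid/device/osd_id/type fields, matching the shape of `ceph-volume raw list` output. The ceph_fsid in every entry equals the ceph.cluster_fsid tag in the LVM listing above, and each device is the /dev/mapper alias of the corresponding LV, so the two reports describe the same three bluestore OSDs. A consistency check, assuming the two payloads were saved to hypothetical lvm.json and raw.json:

    import json

    # Cross-check the two listings logged above: every osd_fsid tag in the
    # LVM report should appear as a key here, with a matching integer osd_id.
    lvm = json.load(open("lvm.json"))   # payload from the first container
    raw = json.load(open("raw.json"))   # payload from this container

    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = raw[fsid]
        assert entry["osd_id"] == int(osd_id) and entry["type"] == "bluestore"
        print(osd_id, fsid, entry["device"])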
Nov 26 02:01:55 compute-0 systemd[1]: libpod-4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b.scope: Deactivated successfully.
Nov 26 02:01:55 compute-0 systemd[1]: libpod-4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b.scope: Consumed 1.212s CPU time.
Nov 26 02:01:55 compute-0 podman[429545]: 2025-11-26 02:01:55.763368915 +0000 UTC m=+1.511165628 container died 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-64175b7e52db33ea56f5cbc4b29d4ae190e375534a7c1fe749ac4f8f212c9f09-merged.mount: Deactivated successfully.
Nov 26 02:01:55 compute-0 podman[429545]: 2025-11-26 02:01:55.857458786 +0000 UTC m=+1.605255469 container remove 4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:01:55 compute-0 systemd[1]: libpod-conmon-4d835245cddf9312b4ba33f224b7d4d00e0998754de3fbf081a700869a7d650b.scope: Deactivated successfully.
Nov 26 02:01:55 compute-0 podman[429597]: 2025-11-26 02:01:55.902798964 +0000 UTC m=+0.096252303 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:01:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:01:55 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:01:55 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:55 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f7572577-6d8e-4c17-a71b-1d625c6c2b07 does not exist
Nov 26 02:01:55 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev eb1eafa4-ada4-4b90-aa16-24e0d257e99b does not exist
Nov 26 02:01:55 compute-0 podman[429594]: 2025-11-26 02:01:55.933950002 +0000 UTC m=+0.119287212 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 02:01:56 compute-0 nova_compute[350387]: 2025-11-26 02:01:56.628 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:56 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:56 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:01:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:57 compute-0 nova_compute[350387]: 2025-11-26 02:01:57.705 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:01:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:01:59 compute-0 podman[158021]: time="2025-11-26T02:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:01:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:01:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Nov 26 02:01:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:01 compute-0 openstack_network_exporter[367323]: ERROR   02:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:02:01 compute-0 openstack_network_exporter[367323]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:02:01 compute-0 openstack_network_exporter[367323]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:02:01 compute-0 openstack_network_exporter[367323]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:02:01 compute-0 openstack_network_exporter[367323]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
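These openstack_network_exporter errors are scrape-time probe failures rather than crashes: appctl.go cannot find the per-daemon control sockets (*.ctl) it expects for ovsdb-server and ovn-northd, and the dpif-netdev calls fail because no userspace datapath exists (this node's OVS presumably uses the kernel datapath), so every scrape logs the same set of lines. A quick way to see which daemons actually expose a control socket, using the host paths the exporter mounts per its config_data above (paths assumed from that volume list):

    import glob

    # List per-daemon control sockets under the run directories the exporter
    # has mounted; an empty result explains the "no control socket" errors.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")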
Nov 26 02:02:01 compute-0 nova_compute[350387]: 2025-11-26 02:02:01.632 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:02 compute-0 nova_compute[350387]: 2025-11-26 02:02:02.708 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:06 compute-0 nova_compute[350387]: 2025-11-26 02:02:06.636 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:07 compute-0 nova_compute[350387]: 2025-11-26 02:02:07.711 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:11 compute-0 nova_compute[350387]: 2025-11-26 02:02:11.639 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:12 compute-0 nova_compute[350387]: 2025-11-26 02:02:12.714 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:13 compute-0 podman[429697]: 2025-11-26 02:02:13.535340204 +0000 UTC m=+0.090533379 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 02:02:13 compute-0 podman[429696]: 2025-11-26 02:02:13.547790403 +0000 UTC m=+0.097882325 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 26 02:02:13 compute-0 podman[429698]: 2025-11-26 02:02:13.577245959 +0000 UTC m=+0.123816282 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:02:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:16 compute-0 podman[429758]: 2025-11-26 02:02:16.618677523 +0000 UTC m=+0.163959318 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 02:02:16 compute-0 nova_compute[350387]: 2025-11-26 02:02:16.643 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:16 compute-0 podman[429759]: 2025-11-26 02:02:16.672952764 +0000 UTC m=+0.213348762 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 02:02:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:17 compute-0 nova_compute[350387]: 2025-11-26 02:02:17.717 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
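
For scale, the mon cache autotuner figures in the line above convert as follows (a quick unit check, nothing more):

    # _set_new_cache_sizes values from the line above, in MiB:
    for name, nbytes in (('cache_size', 1020054731),
                         ('inc_alloc', 348127232),
                         ('kv_alloc', 322961408)):
        print(name, round(nbytes / 2**20, 1), 'MiB')
    # cache_size ~972.8 MiB; inc_alloc 332.0 MiB; kv_alloc 308.0 MiB
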
Nov 26 02:02:20 compute-0 podman[429804]: 2025-11-26 02:02:20.627553938 +0000 UTC m=+0.164257896 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, release-0.7.12=)
Nov 26 02:02:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:21 compute-0 nova_compute[350387]: 2025-11-26 02:02:21.646 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:22 compute-0 nova_compute[350387]: 2025-11-26 02:02:22.721 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:24 compute-0 podman[429824]: 2025-11-26 02:02:24.577893196 +0000 UTC m=+0.124242094 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 26 02:02:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:02:24.981 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:02:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:02:24.982 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:02:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:02:24.983 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:02:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.349 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.350 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.351 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
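
The Acquiring/acquired/released triplet above is the standard trace left by oslo.concurrency's lock decorator. A sketch of the pattern, assuming oslo.concurrency is installed; the function name here is illustrative, not Nova's:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def audit_step():
        # Runs with the 'compute_resources' semaphore held; the decorator's
        # wrapper emits the Acquiring/acquired/released DEBUG lines seen above.
        pass

    audit_step()
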
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.352 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.353 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:02:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:02:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106653014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:02:25 compute-0 nova_compute[350387]: 2025-11-26 02:02:25.921 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
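
That ceph df call is a plain subprocess, so the same oslo API reproduces it. A sketch, assuming oslo.concurrency and a reachable cluster; the JSON key name at the end is an assumption about current Ceph output and may vary by release:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Overall free space in GiB (total_avail_bytes: assumed key name;
    # check your release's `ceph df --format=json` output)
    print(stats['stats']['total_avail_bytes'] / 2**30)
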
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.066 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.067 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.068 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.074 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.075 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.075 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.084 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.084 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.085 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:02:26 compute-0 podman[429867]: 2025-11-26 02:02:26.560446105 +0000 UTC m=+0.106482796 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 02:02:26 compute-0 podman[429868]: 2025-11-26 02:02:26.597875375 +0000 UTC m=+0.135561742 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.636 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.638 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3408MB free_disk=59.88887023925781GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
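
The pci_devices field in that resource view is valid JSON, so simple tallies are easy, for instance splitting virtio (vendor 1af4) from Intel chipset (8086) devices. pci_json below is a placeholder for the [...] literal above:

    import json
    from collections import Counter

    pci_json = '[]'  # placeholder: paste in the pci_devices=[...] literal above
    devices = json.loads(pci_json)
    print(Counter(d['vendor_id'] for d in devices))
    # with the list above: Counter({'1af4': 6, '8086': 5})
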
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.638 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.638 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.649 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.864 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.865 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.865 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.867 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:02:26 compute-0 nova_compute[350387]: 2025-11-26 02:02:26.867 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
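
The final view's numbers follow directly from the three placement allocations listed above plus the 512 MB reservation in the inventory logged just below (a quick cross-check, assuming those allocations are the only consumers):

    used_ram   = 512 + 3 * 512   # reserved + 3 instances -> 2048, as logged
    used_disk  = 3 * 2           # 3 x DISK_GB:2 -> 6 GB, as logged
    free_vcpus = 8 - 3 * 1       # total - 3 x VCPU:1 -> 5, matching the
                                 # hypervisor view's free_vcpus=5 earlier
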
Nov 26 02:02:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:02:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2493040928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:02:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:02:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2493040928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:02:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.199 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:02:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:02:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932713807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.718 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.723 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.732 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.761 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
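
Given that inventory, placement's schedulable capacity per resource class works out as roughly (total - reserved) x allocation_ratio:

    inventory = {'VCPU': (8, 0, 4.0),
                 'MEMORY_MB': (7679, 512, 1.0),
                 'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, int((total - reserved) * ratio))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 52
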
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.763 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.764 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:02:27 compute-0 nova_compute[350387]: 2025-11-26 02:02:27.765 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:28 compute-0 nova_compute[350387]: 2025-11-26 02:02:28.779 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:28 compute-0 nova_compute[350387]: 2025-11-26 02:02:28.780 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:02:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:29 compute-0 nova_compute[350387]: 2025-11-26 02:02:29.404 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:02:29 compute-0 nova_compute[350387]: 2025-11-26 02:02:29.404 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:02:29 compute-0 nova_compute[350387]: 2025-11-26 02:02:29.404 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:02:29 compute-0 podman[158021]: time="2025-11-26T02:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:02:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:02:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
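
Those GET lines are the libpod REST API being scraped over podman's unix socket. A self-contained sketch of the same containers/json query using only the standard library; the socket path is the usual default and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket; enough for the libpod API.
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
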
Nov 26 02:02:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:31 compute-0 openstack_network_exporter[367323]: ERROR   02:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:02:31 compute-0 openstack_network_exporter[367323]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:02:31 compute-0 openstack_network_exporter[367323]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:02:31 compute-0 openstack_network_exporter[367323]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:02:31 compute-0 openstack_network_exporter[367323]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
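
Those exporter errors mean no OVS/OVN control sockets were visible from where the exporter runs. A quick host-side look at the usual runtime directories (default paths assumed; adjust for this deployment's volume mounts):

    import glob

    for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl'):
        # ovs-appctl and ovn-appctl locate their daemons via these sockets
        print(pattern, glob.glob(pattern))
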
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.652 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.740 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
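
The cache entry above is the standard Nova network_info shape: VIF -> network -> subnets -> ips -> floating_ips. A small helper that walks it (a sketch; the structure is read straight off the logged entry):

    def addresses(network_info):
        # Yield (fixed_ip, [floating_ips]) for every port in the cache entry.
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])

    # For the entry above: ('192.168.0.232', ['192.168.122.234'])
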
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.759 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.759 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.760 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.761 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:31 compute-0 nova_compute[350387]: 2025-11-26 02:02:31.761 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:32 compute-0 nova_compute[350387]: 2025-11-26 02:02:32.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:32 compute-0 nova_compute[350387]: 2025-11-26 02:02:32.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:32 compute-0 nova_compute[350387]: 2025-11-26 02:02:32.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:02:32 compute-0 nova_compute[350387]: 2025-11-26 02:02:32.727 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:33 compute-0 nova_compute[350387]: 2025-11-26 02:02:33.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:36 compute-0 nova_compute[350387]: 2025-11-26 02:02:36.654 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:37 compute-0 nova_compute[350387]: 2025-11-26 02:02:37.729 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:02:41
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:02:41 compute-0 nova_compute[350387]: 2025-11-26 02:02:41.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:02:41 compute-0 nova_compute[350387]: 2025-11-26 02:02:41.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 02:02:41 compute-0 nova_compute[350387]: 2025-11-26 02:02:41.315 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 02:02:41 compute-0 nova_compute[350387]: 2025-11-26 02:02:41.657 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:02:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:02:42 compute-0 nova_compute[350387]: 2025-11-26 02:02:42.733 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.870 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.870 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.871 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.871 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50aa1ffec0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
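
The burst of Registering pollster lines above is each pollster being handed to a shared thread-pool executor; with a single worker (as the earlier warning notes) the tasks simply serialize. An illustration of that pattern, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return f'polled {name}'

    # One worker, several pollsters: submissions queue up and run in order,
    # which is why polling takes longer when pollsters outnumber threads.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, n)
                   for n in ('disk.ephemeral.size', 'network.incoming.packets')]
        for f in futures:
            print(f.result())
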
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.885 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'name': 'vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.891 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'name': 'vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.897 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
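Each discovery record above carries the instance's flavor, image, and metering metadata. As a quick illustration of consuming those dicts, grouping the three discovered instances by their metering.server_group key (IDs and group copied from the log; the helper itself is ad hoc):

    from collections import defaultdict

    # The three discovery dicts logged above, reduced to the fields used here.
    instances = [
        {"id": "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2",
         "metadata": {"metering.server_group": "366b90b6-2e85-40c4-9ca1-855cf9022409"}},
        {"id": "d32050dc-c041-47df-994e-7d05cf1f489a",
         "metadata": {"metering.server_group": "366b90b6-2e85-40c4-9ca1-855cf9022409"}},
        {"id": "b1c088bc-7a6b-4580-93ff-685731747189", "metadata": {}},
    ]

    groups = defaultdict(list)
    for inst in instances:
        key = inst["metadata"].get("metering.server_group", "<ungrouped>")
        groups[key].append(inst["id"])

    for group, ids in sorted(groups.items()):
        print(group, "->", ids)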
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.897 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.898 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.898 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.899 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:02:42.898791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.901 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
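Every poll in this section follows the same four steps visible above: coordination check, heartbeat update, the poll itself, and a completion message. A compressed sketch of that control flow under the configuration seen here (no coordination groups, so hashrings stay None; names are illustrative, not the real manager.py structure):

    import datetime

    heartbeats = {}     # meter name -> timestamp of the last successful poll
    hashrings = None    # no coordination groups are configured in these logs

    def poll(name, fn, coordination_group=None):
        # 1. Coordination check: with no group name and no hashrings, this
        #    agent polls every discovered resource itself.
        if coordination_group is not None and hashrings is not None:
            raise NotImplementedError("coordinated polling not sketched here")
        # 2. Heartbeat, 3. the poll itself, 4. completion message.
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc).isoformat()
        samples = fn()
        print(f"Finished polling pollster {name} -> {samples}")
        return samples

    poll("disk.ephemeral.size", lambda: [1, 1, 1])   # 1 GiB ephemeral per instance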
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.902 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.902 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.903 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.903 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:02:42.903339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.912 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.918 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.926 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.927 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
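The per-instance "volume:" lines are the output of turning raw libvirt interface counters into samples. A rough stand-in for that stats-to-sample step, shaped after the fields these log lines expose (the field names are assumptions for illustration, not ceilometer's Sample class):

    import datetime

    def stats_to_sample(instance_id, meter, volume, unit):
        # One sample per instance per meter, stamped at poll time; this is
        # the shape the "volume:" DEBUG lines above summarize.
        return {
            "resource_id": instance_id,
            "name": meter,
            "volume": volume,
            "unit": unit,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

    print(stats_to_sample("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2",
                          "network.incoming.packets", 16, "packet"))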
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.927 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.927 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.928 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.928 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.928 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:02:42.928539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.931 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.931 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.931 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.931 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.931 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.932 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:02:42.932200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.933 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.934 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.934 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.935 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.936 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.936 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.936 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.936 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.937 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:02:42.937036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.938 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.938 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.939 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.940 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.941 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.941 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.941 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.941 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.942 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.943 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:02:42.942185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.943 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.944 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.945 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.945 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.946 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.946 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.946 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.947 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:02:42.946912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:42.988 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/cpu volume: 40480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.032 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/cpu volume: 39000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.073 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 46680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.074 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
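The cpu volumes are cumulative guest CPU time in nanoseconds, so a utilization figure needs two successive polls. A worked example using the value logged above for instance b1c088bc-7a6b-4580-93ff-685731747189 (46680000000 ns); the previous reading and the 300 s polling interval are assumed:

    # Two successive cumulative cpu readings (ns); prev_ns and the 300 s
    # polling interval are assumed, curr_ns is the logged value.
    prev_ns, curr_ns = 46_380_000_000, 46_680_000_000
    interval_s, vcpus = 300, 1            # m1.small has 1 vCPU

    # Fraction of one vCPU consumed over the interval, as a percentage.
    cpu_util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{cpu_util_pct:.1f}% CPU")     # -> 0.1% CPU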
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.075 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.075 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.075 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.075 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.076 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:02:43.076106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.077 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.078 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.078 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.079 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
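The .delta meters report the change since the previous poll rather than the raw counter: instance d32050dc-c041-47df-994e-7d05cf1f489a logs a cumulative network.outgoing.bytes of 2356 above and a delta of 70 here, implying a previous reading of 2286. A minimal delta computation, clamped at zero so a counter reset (e.g. an instance reboot) does not produce a large negative value:

    def delta(prev, curr):
        # A guest reboot resets the cumulative counter; clamp at zero
        # instead of emitting a large negative delta.
        return max(curr - prev, 0)

    # -> 70, matching the logged network.outgoing.bytes.delta for d32050dc...
    print(delta(2286, 2356))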
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.079 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.079 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.080 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.080 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.080 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.080 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/memory.usage volume: 49.00390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.081 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/memory.usage volume: 49.0546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.081 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.082 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
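memory.usage is reported in MiB, so against the m1.small flavor's 512 MiB of RAM (from the discovery data above) these instances sit just under ten percent utilization:

    flavor_ram_mib = 512            # m1.small, from the discovery data above
    usage_mib = 49.00390625         # memory.usage logged for a8b199f7-...
    print(f"{usage_mib / flavor_ram_mib:.1%}")   # -> 9.6%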
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.083 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:02:43.080495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.083 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
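The "Skip pollster ... no new resources found this cycle" line reflects the per-cycle discovery cache: a discovery method runs once per cycle, and a pollster whose cached discovery yields nothing new is skipped instead of re-run. One plausible reading of that logic, sketched with hypothetical names:

    discovery_cache = {}

    def discover(method, run_discovery):
        # Run each discovery method at most once per polling cycle and let
        # every pollster that shares it reuse the cached result.
        if method not in discovery_cache:
            discovery_cache[method] = run_discovery()
        return discovery_cache[method]

    resources = discover("local_instances", lambda: [])   # nothing new this cycle
    if not resources:
        print("Skip pollster network.outgoing.bytes.rate, "
              "no new resources found this cycle")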
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.083 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.084 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.084 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.084 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.084 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.085 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:02:43.084540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.085 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.086 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.086 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.087 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.087 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.087 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.087 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.087 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.088 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.088 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:02:43.087768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.089 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.090 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.090 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.091 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.091 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.091 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.091 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.091 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.092 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.093 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:02:43.091635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.094 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.095 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.095 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.095 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.095 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.095 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.096 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.097 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.097 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.098 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.098 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.098 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.099 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.099 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:02:43.095510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:02:43.099035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.100 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.100 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.101 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.101 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.101 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.101 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.102 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:02:43.102029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.137 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.138 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.138 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.167 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.168 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.168 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.197 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.198 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.199 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.200 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
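disk.device.capacity emits one sample per attached block device, which is why each instance logs three volumes: two 1 GiB disks matching the flavor's root and ephemeral sizes, plus a small third device of a few hundred KiB. Iterating the per-device capacities of instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 (the byte values are from the log; the device names vda/vdb/vdc are assumptions for illustration):

    GiB = 1024 ** 3

    # Per-device capacities (bytes) logged for a8b199f7-8cd5-45ea-bc7e-af8352a6afa2.
    devices = {"vda": 1 * GiB, "vdb": 1 * GiB, "vdc": 583680}

    for dev, capacity in devices.items():
        print(f"disk.device.capacity[{dev}] = {capacity} B"
              f" ({capacity / GiB:.6f} GiB)")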
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.200 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:02:43.201059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.293 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.294 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.294 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.400 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.401 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.401 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.498 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.499 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.499 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.500 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.501 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.501 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.502 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 1818076010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:02:43.502337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.503 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 286055535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.503 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.latency volume: 221080770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.504 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 2007436788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.504 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 283353651 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.504 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 197487344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.505 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.505 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.506 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.507 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
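The lines above are one complete pollster cycle, and every disk and network meter in this section repeats the same shape: run discovery (local_instances), check whether the pollster belongs to a coordinated source (no hash ring is configured here, so it does not), record a heartbeat, emit one sample per disk device per instance, then log completion. A minimal sketch of that control flow, assuming a simplified pollster interface rather than the real ceilometer.polling.manager API:

    # Sketch only: run_pollster/discover are stand-ins, not ceilometer's classes.
    import datetime

    def run_pollster(pollster, discover, heartbeats):
        resources = discover()                    # "Executing discovery process ..."
        if pollster.coordination_group is None:   # "Checking if we need coordination ..."
            pass                                  # no hash ring -> poll all local instances
        heartbeats[pollster.name] = datetime.datetime.utcnow()  # heartbeat update
        samples = []
        for resource in resources:                # three instances in this log
            for stat in pollster.get_samples(resource):
                # "<instance-uuid>/<meter> volume: <value>"
                print(f"{resource['id']}/{pollster.name} volume: {stat.volume}")
                samples.append(stat)
        return samples                            # "Finished polling pollster ..."
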
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.507 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.508 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.508 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.508 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.509 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.509 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.510 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.510 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.511 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:02:43.508290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.511 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.512 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.512 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.513 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
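The three read.requests values per instance (840, 173, 124) are cumulative counters, one per disk device, not rates; downstream consumers typically difference two consecutive polls to get operations per second. A small self-contained helper showing the usual conversion (the reset handling is a common convention, not taken from this log):

    def counter_rate(prev_value, prev_ts, cur_value, cur_ts):
        """Convert two cumulative counter readings into a per-second rate."""
        dt = (cur_ts - prev_ts).total_seconds()
        if dt <= 0 or cur_value < prev_value:  # counter reset, e.g. instance reboot
            return None
        return (cur_value - prev_value) / dt
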
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.513 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.514 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:02:43.514570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.515 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.515 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.516 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.516 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.516 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.517 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.517 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.518 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.518 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.519 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
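The usage figures decode cleanly: 1073741824 bytes is exactly 2**30, i.e. a 1 GiB volume, while the third device on each instance (583680 or 485376 bytes) is a much smaller disk, plausibly a config drive (an assumption; the log does not name the devices):

    >>> 1073741824 == 2**30
    True
    >>> 583680 / 1024, 485376 / 1024
    (570.0, 474.0)        # KiB
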
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.520 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.520 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.521 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.521 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.522 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.523 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.524 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:02:43.521312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.524 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.525 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.525 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.526 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.526 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.527 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.528 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.529 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.530 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.530 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.531 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.531 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
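power.state volume 1 for all three instances matches libvirt's domain-state numbering, where 1 is VIR_DOMAIN_RUNNING; assuming the meter passes the libvirt state through unchanged, every instance on this host is running. The full enum, for reading these values:

    # virDomainState values from libvirt; power.state appears to report these raw.
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
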
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.532 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.532 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.532 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.532 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:02:43.529406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.533 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.533 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 5109418941 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.534 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 30681884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.534 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.535 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 5738822785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.535 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 28688069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.535 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.536 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.536 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.537 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:02:43.533125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.538 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
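The write.latency counters (roughly 5.1e9 to 5.8e9 per root device) are cumulative time spent in write I/O, in nanoseconds per the usual libvirt blockstats semantics; paired with the write.requests counters polled just below, they yield a mean per-request latency. Worked from this log, for the first device of instance a8b199f7:

    total_write_ns = 5109418941   # disk.device.write.latency (cumulative ns)
    write_requests = 231          # disk.device.write.requests (cumulative)
    print(total_write_ns / write_requests / 1e6)   # ~22.1 ms per write on average
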
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.538 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.539 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.539 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.539 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.539 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.539 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.540 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.540 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.541 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:02:43.539519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.541 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.542 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.542 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.542 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.542 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.543 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.543 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.543 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.543 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.543 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.544 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.544 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.544 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:02:43.543955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.545 15 DEBUG ceilometer.compute.pollsters [-] a8b199f7-8cd5-45ea-bc7e-af8352a6afa2/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.545 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.545 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.545 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.546 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.546 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.546 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.547 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
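The per-device capacity, allocation, and usage meters line up with the triple returned by libvirt's virDomainGetBlockInfo (virtual size, allocated size, physical size); here allocation equals usage for every device, which is what fully-allocated 1 GiB volumes look like. A sketch of querying the same numbers directly via libvirt-python, with the device name "vda" assumed for illustration:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("a8b199f7-8cd5-45ea-bc7e-af8352a6afa2")
    capacity, allocation, physical = dom.blockInfo("vda")  # "vda" is an assumption
    print(capacity, allocation, physical)                  # all in bytes
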
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.547 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.547 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.547 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.547 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:02:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:02:43.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
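With that, every pollster in this task has reported in, and the whole sweep fits inside roughly 50 ms of journal time (02:02:43.501 to 02:02:43.550). Pairing the INFO start/finish lines is an easy way to measure per-pollster duration from a journal dump; a small parser reading a log like this one on stdin:

    import re
    import sys
    from datetime import datetime

    START = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO \S+ \[-\] Polling pollster (\S+)")
    END = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO \S+ \[-\] Finished polling pollster (\S+)")

    open_polls = {}
    for line in sys.stdin:
        if m := START.search(line):
            open_polls[m.group(2)] = datetime.fromisoformat(m.group(1))
        elif (m := END.search(line)) and m.group(2) in open_polls:
            dt = datetime.fromisoformat(m.group(1)) - open_polls.pop(m.group(2))
            print(f"{m.group(2)}: {dt.total_seconds() * 1000:.1f} ms")
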
Nov 26 02:02:44 compute-0 podman[429933]: 2025-11-26 02:02:44.583989497 +0000 UTC m=+0.127335461 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 26 02:02:44 compute-0 podman[429935]: 2025-11-26 02:02:44.590408427 +0000 UTC m=+0.120946012 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:02:44 compute-0 podman[429934]: 2025-11-26 02:02:44.611479798 +0000 UTC m=+0.146985352 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
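Interleaved with the agent output, podman logs one health_status event per container healthcheck run; the check itself is the healthcheck.test command listed in each config_data blob, executed inside the container. The same state can be read back on demand through podman's Docker-compatible inspect output (field names assumed from that compatibility; container names taken from the log):

    import json
    import subprocess

    def health(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)
        return f"{name}: {state['Status']} (failing streak {state['FailingStreak']})"

    for c in ("ceilometer_agent_compute", "podman_exporter", "ovn_metadata_agent"):
        print(health(c))
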
Nov 26 02:02:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:45 compute-0 nova_compute[350387]: 2025-11-26 02:02:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:02:45 compute-0 nova_compute[350387]: 2025-11-26 02:02:45.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
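The two nova_compute lines are oslo.service's periodic-task machinery firing ComputeManager._cleanup_incomplete_migrations. The pattern behind every "Running periodic task ..." line is a decorated method collected by a PeriodicTasks subclass; a stripped-down sketch (the spacing value is illustrative, not nova's actual setting):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=600)
        def _cleanup_incomplete_migrations(self, context):
            pass  # reconcile deleted instances left mid-migration

    # The service loop then calls manager.run_periodic_tasks(context)
    # on a timer, producing exactly the DEBUG lines above.
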
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.414782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566414925, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3439281, "memory_usage": 3492120, "flush_reason": "Manual Compaction"}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566440341, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3373413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30111, "largest_seqno": 32157, "table_properties": {"data_size": 3364023, "index_size": 5948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18578, "raw_average_key_size": 20, "raw_value_size": 3345477, "raw_average_value_size": 3612, "num_data_blocks": 264, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122337, "oldest_key_time": 1764122337, "file_creation_time": 1764122566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 25660 microseconds, and 15728 cpu microseconds.
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.440449) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3373413 bytes OK
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.440481) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.444193) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.444217) EVENT_LOG_v1 {"time_micros": 1764122566444209, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.444242) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3430725, prev total WAL file size 3430725, number of live WAL files 2.
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.446305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3294KB)], [68(7048KB)]
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566446380, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10590892, "oldest_snapshot_seqno": -1}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5332 keys, 8847720 bytes, temperature: kUnknown
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566507649, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8847720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8811530, "index_size": 21744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 133620, "raw_average_key_size": 25, "raw_value_size": 8714549, "raw_average_value_size": 1634, "num_data_blocks": 897, "num_entries": 5332, "num_filter_entries": 5332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.508109) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8847720 bytes
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.511083) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.6 rd, 144.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5846, records dropped: 514 output_compression: NoCompression
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.511115) EVENT_LOG_v1 {"time_micros": 1764122566511100, "job": 38, "event": "compaction_finished", "compaction_time_micros": 61354, "compaction_time_cpu_micros": 45185, "output_level": 6, "num_output_files": 1, "total_output_size": 8847720, "num_input_records": 5846, "num_output_records": 5332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566513013, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122566515713, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.445980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.515952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.515959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.515962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.515964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:02:46.515967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:02:46 compute-0 nova_compute[350387]: 2025-11-26 02:02:46.660 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:47 compute-0 podman[429990]: 2025-11-26 02:02:47.596008376 +0000 UTC m=+0.143985097 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:02:47 compute-0 podman[429991]: 2025-11-26 02:02:47.65608539 +0000 UTC m=+0.193784593 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
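These health_status=healthy records come from podman periodically executing each container's configured healthcheck (the 'test' command under 'healthcheck' in config_data). The same check can be run on demand; a small sketch, using the ovn_controller name from the log:

    import subprocess

    # `podman healthcheck run NAME` executes the container's configured test once
    # and exits non-zero on failure - the same check behind the timer-driven
    # health_status events logged above.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")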
Nov 26 02:02:47 compute-0 nova_compute[350387]: 2025-11-26 02:02:47.736 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016571738458032168 of space, bias 1.0, pg target 0.49715215374096505 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:02:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
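Each pg target above is the pool's share of raw capacity (the "using X of space" figure, against the 64411926528-byte, i.e. roughly 60 GiB, total) times its bias times a cluster-wide PG budget. Assuming 3 OSDs (the "3 LVM" data devices reported later in this log) and the default mon_target_pg_per_osd of 100, a budget of 300 reproduces the logged values exactly:

    # Reproducing the pg_autoscaler targets logged above. The budget assumes
    # 3 OSDs x mon_target_pg_per_osd=100 - an inference from this log, not a
    # value printed in it.
    budget = 3 * 100
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.0016571738458032168,  1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * budget)
    # -> 0.00215..., 0.49715..., 0.07600..., 0.00061... - matching the log.
    # The autoscaler then quantizes each target to a power of two, subject to
    # minimums and a change threshold, before comparing with the current pg_num,
    # hence "quantized to 16 (current 32)" for the metadata pool.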
Nov 26 02:02:51 compute-0 podman[430036]: 2025-11-26 02:02:51.591876315 +0000 UTC m=+0.140471839 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 02:02:51 compute-0 nova_compute[350387]: 2025-11-26 02:02:51.662 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:52 compute-0 nova_compute[350387]: 2025-11-26 02:02:52.741 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:02:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:55 compute-0 podman[430057]: 2025-11-26 02:02:55.55754171 +0000 UTC m=+0.101280221 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 02:02:56 compute-0 nova_compute[350387]: 2025-11-26 02:02:56.664 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:56 compute-0 podman[430175]: 2025-11-26 02:02:56.840947829 +0000 UTC m=+0.124279095 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:02:56 compute-0 podman[430174]: 2025-11-26 02:02:56.857478533 +0000 UTC m=+0.142043523 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 02:02:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:02:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ba24de6e-6192-4535-a3c7-55610d625503 does not exist
Nov 26 02:02:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 05e46d1c-e026-4a74-8c82-3bd972597386 does not exist
Nov 26 02:02:57 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5cd57263-72bc-476c-a691-108e230c2d13 does not exist
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:02:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:02:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:02:57 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
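The audited mon_commands are the same operations an operator would issue through the ceph CLI; cephadm's mgr module simply dispatches them straight to the mon. A sketch of the equivalent calls (requires admin credentials on the host; the helper name is illustrative):

    import subprocess

    def ceph(*args: str) -> str:
        # CLI form of the mon_command prefixes audited above.
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")  # {"prefix": "config generate-minimal-conf"}
    admin_keyring = ceph("auth", "get", "client.admin")     # {"prefix": "auth get", "entity": "client.admin"}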
Nov 26 02:02:57 compute-0 nova_compute[350387]: 2025-11-26 02:02:57.755 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.526546044 +0000 UTC m=+0.076219837 container create 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.494806774 +0000 UTC m=+0.044480597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:02:58 compute-0 systemd[1]: Started libpod-conmon-967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646.scope.
Nov 26 02:02:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.660625483 +0000 UTC m=+0.210299306 container init 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.676905249 +0000 UTC m=+0.226579042 container start 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.681619722 +0000 UTC m=+0.231293515 container attach 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:02:58 compute-0 stoic_jemison[430399]: 167 167
Nov 26 02:02:58 compute-0 systemd[1]: libpod-967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646.scope: Deactivated successfully.
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.691379905 +0000 UTC m=+0.241053718 container died 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 02:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6880b173a32ce76b7ab71f0576d4654b5849baab8f268586acd6f76791f75c9-merged.mount: Deactivated successfully.
Nov 26 02:02:58 compute-0 podman[430385]: 2025-11-26 02:02:58.775684699 +0000 UTC m=+0.325358492 container remove 967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:02:58 compute-0 systemd[1]: libpod-conmon-967d519d419bed0c8ba977783119643f0b716c054f2ca90accf01c0c47ed0646.scope: Deactivated successfully.
Nov 26 02:02:59 compute-0 podman[430422]: 2025-11-26 02:02:59.08038035 +0000 UTC m=+0.090469237 container create 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:02:59 compute-0 podman[430422]: 2025-11-26 02:02:59.044128544 +0000 UTC m=+0.054217491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:02:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:02:59 compute-0 systemd[1]: Started libpod-conmon-73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0.scope.
Nov 26 02:02:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
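The repeated xfs warnings mean these filesystems were created without the xfs bigtime feature, so their inode timestamps saturate at 0x7fffffff seconds, the signed 32-bit time_t limit. Decoding the constant from the messages above:

    from datetime import datetime, timezone

    # The 0x7fffffff limit from the xfs remount warnings, as a UTC date.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00 (the y2038 rollover)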
Nov 26 02:02:59 compute-0 podman[430422]: 2025-11-26 02:02:59.250267623 +0000 UTC m=+0.260356550 container init 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:02:59 compute-0 podman[430422]: 2025-11-26 02:02:59.295215713 +0000 UTC m=+0.305304610 container start 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:02:59 compute-0 podman[430422]: 2025-11-26 02:02:59.3029443 +0000 UTC m=+0.313033237 container attach 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 02:02:59 compute-0 podman[158021]: time="2025-11-26T02:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:02:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45524 "" "Go-http-client/1.1"
Nov 26 02:02:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9046 "" "Go-http-client/1.1"
Nov 26 02:02:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:00 compute-0 xenodochial_hamilton[430437]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:03:00 compute-0 xenodochial_hamilton[430437]: --> relative data size: 1.0
Nov 26 02:03:00 compute-0 xenodochial_hamilton[430437]: --> All data devices are unavailable
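The xenodochial_hamilton output is a ceph-volume batch report concluding that all three LVM data devices are already consumed. The per-device availability behind that verdict can be inspected directly; a sketch, assuming it runs where ceph-volume is available (e.g. inside a cephadm shell):

    import json
    import subprocess

    # `ceph-volume inventory --format json` reports an "available" flag and
    # rejected_reasons per device - the availability test that makes the batch
    # report above declare all data devices unavailable.
    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for dev in inv:
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))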
Nov 26 02:03:00 compute-0 systemd[1]: libpod-73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0.scope: Deactivated successfully.
Nov 26 02:03:00 compute-0 podman[430422]: 2025-11-26 02:03:00.539514906 +0000 UTC m=+1.549603803 container died 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 02:03:00 compute-0 systemd[1]: libpod-73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0.scope: Consumed 1.163s CPU time.
Nov 26 02:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a76ddbdbc169e6d38ea6342509161deb6e5e02be5ce566915f3a4ec1c7cd3b7-merged.mount: Deactivated successfully.
Nov 26 02:03:00 compute-0 podman[430422]: 2025-11-26 02:03:00.64130196 +0000 UTC m=+1.651390827 container remove 73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hamilton, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:03:00 compute-0 systemd[1]: libpod-conmon-73a4f583693977b1af56df37ad9e9d455dd354a6a953a0bd65587941122fedd0.scope: Deactivated successfully.
Nov 26 02:03:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: ERROR   02:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:03:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:03:01 compute-0 nova_compute[350387]: 2025-11-26 02:03:01.668 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.757626936 +0000 UTC m=+0.086665171 container create 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.726731359 +0000 UTC m=+0.055769654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:03:01 compute-0 systemd[1]: Started libpod-conmon-42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236.scope.
Nov 26 02:03:01 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.906346875 +0000 UTC m=+0.235385180 container init 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.925565094 +0000 UTC m=+0.254603339 container start 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.932091877 +0000 UTC m=+0.261130182 container attach 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 02:03:01 compute-0 trusting_herschel[430634]: 167 167
Nov 26 02:03:01 compute-0 systemd[1]: libpod-42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236.scope: Deactivated successfully.
Nov 26 02:03:01 compute-0 podman[430617]: 2025-11-26 02:03:01.938294501 +0000 UTC m=+0.267332746 container died 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 02:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7baef40cf7a4b86e1889211aed053f91632bc77967efca556279611316108f9b-merged.mount: Deactivated successfully.
Nov 26 02:03:02 compute-0 podman[430617]: 2025-11-26 02:03:02.018027126 +0000 UTC m=+0.347065341 container remove 42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_herschel, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 02:03:02 compute-0 systemd[1]: libpod-conmon-42939ec29259ba04c3d9ba5180ce051e22a5d3f95e0702753dae3d3427a26236.scope: Deactivated successfully.
Nov 26 02:03:02 compute-0 podman[430657]: 2025-11-26 02:03:02.273639101 +0000 UTC m=+0.080267451 container create bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:03:02 compute-0 podman[430657]: 2025-11-26 02:03:02.23650024 +0000 UTC m=+0.043128680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:03:02 compute-0 systemd[1]: Started libpod-conmon-bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b.scope.
Nov 26 02:03:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac15a1e4dfc891dce312b9dc1e20ebc8d8c8fe3495508dc2749ed561352a904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac15a1e4dfc891dce312b9dc1e20ebc8d8c8fe3495508dc2749ed561352a904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac15a1e4dfc891dce312b9dc1e20ebc8d8c8fe3495508dc2749ed561352a904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ac15a1e4dfc891dce312b9dc1e20ebc8d8c8fe3495508dc2749ed561352a904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:02 compute-0 podman[430657]: 2025-11-26 02:03:02.451645401 +0000 UTC m=+0.258273771 container init bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:03:02 compute-0 podman[430657]: 2025-11-26 02:03:02.479130702 +0000 UTC m=+0.285759082 container start bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:03:02 compute-0 podman[430657]: 2025-11-26 02:03:02.486477098 +0000 UTC m=+0.293105528 container attach bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 26 02:03:02 compute-0 nova_compute[350387]: 2025-11-26 02:03:02.758 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.267 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.269 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.270 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.271 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.272 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.274 350391 INFO nova.compute.manager [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Terminating instance
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.276 350391 DEBUG nova.compute.manager [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
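The bold_fermi output that follows appears to be `ceph-volume lvm list --format json`: a JSON object keyed by OSD id, one entry per logical volume, with the ceph.* metadata duplicated in lv_tags and in the parsed tags map. A sketch extracting the OSD-to-device mapping, assuming the full JSON (it continues past this excerpt) has been captured to a file named lvm_list.json:

    import json

    # Map each OSD id to its fsid, backing LV, and physical device(s), using the
    # structure shown in the listing below.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            tags = lv["tags"]
            print(osd_id, tags["ceph.osd_fsid"], lv["lv_path"], ",".join(lv["devices"]))
    # -> 0 835781ef-644a-4834-abb3-029e5bcba0ff /dev/ceph_vg0/ceph_lv0 /dev/loop3
    #    1 a345f9b0-19f1-464f-95c4-9c68bb202f1e /dev/ceph_vg1/ceph_lv1 /dev/loop4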
Nov 26 02:03:03 compute-0 bold_fermi[430674]: {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    "0": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "devices": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "/dev/loop3"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            ],
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_name": "ceph_lv0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_size": "21470642176",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "name": "ceph_lv0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "tags": {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_name": "ceph",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.crush_device_class": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.encrypted": "0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_id": "0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.vdo": "0"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            },
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "vg_name": "ceph_vg0"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        }
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    ],
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    "1": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "devices": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "/dev/loop4"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            ],
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_name": "ceph_lv1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_size": "21470642176",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "name": "ceph_lv1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "tags": {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_name": "ceph",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.crush_device_class": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.encrypted": "0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_id": "1",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.vdo": "0"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            },
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "vg_name": "ceph_vg1"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        }
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    ],
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    "2": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "devices": [
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "/dev/loop5"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            ],
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_name": "ceph_lv2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_size": "21470642176",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "name": "ceph_lv2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "tags": {
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.cluster_name": "ceph",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.crush_device_class": "",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.encrypted": "0",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osd_id": "2",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:                "ceph.vdo": "0"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            },
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "type": "block",
Nov 26 02:03:03 compute-0 bold_fermi[430674]:            "vg_name": "ceph_vg2"
Nov 26 02:03:03 compute-0 bold_fermi[430674]:        }
Nov 26 02:03:03 compute-0 bold_fermi[430674]:    ]
Nov 26 02:03:03 compute-0 bold_fermi[430674]: }
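[note] The JSON block emitted by the bold_fermi container has the shape of `ceph-volume lvm list --format json`: a map of OSD id to the logical volumes backing it. A short sketch that summarizes it, assuming the JSON has been captured to a file (the filename is illustrative):

    import json

    with open('ceph_volume_lvm_list.json') as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"size={int(lv['lv_size']) / 2**30:.1f} GiB")

For the output above this prints three ~20.0 GiB block LVs (osd.0-2 on /dev/loop3-5), all tagged with cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c.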
Nov 26 02:03:03 compute-0 systemd[1]: libpod-bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b.scope: Deactivated successfully.
Nov 26 02:03:03 compute-0 podman[430657]: 2025-11-26 02:03:03.330607082 +0000 UTC m=+1.137235452 container died bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ac15a1e4dfc891dce312b9dc1e20ebc8d8c8fe3495508dc2749ed561352a904-merged.mount: Deactivated successfully.
Nov 26 02:03:03 compute-0 podman[430657]: 2025-11-26 02:03:03.438190458 +0000 UTC m=+1.244818838 container remove bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_fermi, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:03:03 compute-0 systemd[1]: libpod-conmon-bf52c4fbd322b9e65661ff6cf57e62430e16098ebdeb048741c965f712d5ef2b.scope: Deactivated successfully.
Nov 26 02:03:03 compute-0 kernel: tap867227e5-44 (unregistering): left promiscuous mode
Nov 26 02:03:03 compute-0 NetworkManager[48886]: <info>  [1764122583.4791] device (tap867227e5-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:03:03 compute-0 ovn_controller[89102]: 2025-11-26T02:03:03Z|00054|binding|INFO|Releasing lport 867227e5-4422-4cfb-93d9-0589612717db from this chassis (sb_readonly=0)
Nov 26 02:03:03 compute-0 ovn_controller[89102]: 2025-11-26T02:03:03Z|00055|binding|INFO|Setting lport 867227e5-4422-4cfb-93d9-0589612717db down in Southbound
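[note] ovn-controller is releasing the logical port and marking it down in the Southbound DB as the VM's tap device disappears. A sketch for confirming the release, assuming ovn-sbctl is available on this host with access to the SB database:

    import subprocess

    # After the release, the Port_Binding row should show up=false and an
    # empty chassis column (standard OVN Southbound schema).
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=867227e5-4422-4cfb-93d9-0589612717db'],
        check=True)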
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.485 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 ovn_controller[89102]: 2025-11-26T02:03:03Z|00056|binding|INFO|Removing iface tap867227e5-44 ovn-installed in OVS
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.491 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.501 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:c0:70 192.168.0.36'], port_security=['fa:16:3e:d6:c0:70 192.168.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-kl5by2wl55k2-qlnmxyop4kzj-port-cgeuuhndjcpy', 'neutron:cidrs': '192.168.0.36/24', 'neutron:device_id': 'a8b199f7-8cd5-45ea-bc7e-af8352a6afa2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-kl5by2wl55k2-qlnmxyop4kzj-port-cgeuuhndjcpy', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=867227e5-4422-4cfb-93d9-0589612717db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.502 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.503 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 867227e5-4422-4cfb-93d9-0589612717db in datapath c97f5f89-70be-4349-beb5-5f8e6065072e unbound from our chassis#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.506 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.526 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a8643491-159e-47d4-b4ea-818b914bec8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:03:03 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 26 02:03:03 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 48.498s CPU time.
Nov 26 02:03:03 compute-0 systemd-machined[138512]: Machine qemu-3-instance-00000003 terminated.
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.567 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0cc4c0-4e0d-4c9e-9cd6-537385eda74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.570 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[775d999d-fef7-43e1-b4a3-f797d110c3b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.602 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[672d4161-752a-4fca-a32d-0975904f065e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.626 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9e8689-7215-42ca-ba31-a0e1ca236377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 14, 'rx_bytes': 532, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 14, 'rx_bytes': 532, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 22545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430730, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.649 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e057a49d-1d47-425c-a32e-f705dbc650f5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430733, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430733, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
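[note] The two large privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK, then two RTM_NEWADDR records) executed inside the metadata namespace named by each message's 'target' field. A root-only sketch of the equivalent direct query; namespace and interface names are taken from the log, and the namespace must still exist:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e') as ns:
        for msg in ns.get_addr(label='tapc97f5f89-71'):
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])

This should print 192.168.0.2 24 and the metadata service address 169.254.169.254 32, matching the RTM_NEWADDR records above.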
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.651 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.654 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.663 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.664 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.665 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.665 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:03:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:03.666 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
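[note] The three ovsdbapp transactions above (drop the tap from br-ex, re-add it to br-int, set its iface-id) are the metadata agent re-provisioning the datapath; the last two report "Transaction caused no change" because the port was already in the desired state. Roughly equivalent ovs-vsctl invocations, as a sketch (ovsdbapp speaks OVSDB directly rather than shelling out):

    import subprocess

    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tapc97f5f89-70'],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapc97f5f89-70'],
        ['ovs-vsctl', 'set', 'Interface', 'tapc97f5f89-70',
         'external_ids:iface-id=3824ec63-7278-42dc-8c72-8ec8e06c2f0b'],
    ):
        subprocess.run(cmd, check=True)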
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.711 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.723 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.729 350391 INFO nova.virt.libvirt.driver [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Instance destroyed successfully.#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.730 350391 DEBUG nova.objects.instance [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'resources' on Instance uuid a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.742 350391 DEBUG nova.virt.libvirt.vif [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T01:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-kl5by2wl55k2-qlnmxyop4kzj-vnf-gputkh7zzb6o',id=3,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T01:55:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-g9hi0hcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T01:55:20Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 02:03:03 compute-0 nova_compute[350387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjg0MTgxNzI2MzkxMzQyNDc3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI4NDE4MTcyNjM5MTM0MjQ3NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yODQxODE3MjYzOTEzNDI0Nzc1PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=a8b199f7-8cd5-45ea-bc7e-af8352a6afa2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.742 350391 DEBUG nova.network.os_vif_util [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.743 350391 DEBUG nova.network.os_vif_util [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.744 350391 DEBUG os_vif [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.745 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.746 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap867227e5-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.748 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.750 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.753 350391 INFO os_vif [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:c0:70,bridge_name='br-int',has_traffic_filtering=True,id=867227e5-4422-4cfb-93d9-0589612717db,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap867227e5-44')#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.788 350391 DEBUG nova.compute.manager [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-vif-unplugged-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.788 350391 DEBUG oslo_concurrency.lockutils [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.789 350391 DEBUG oslo_concurrency.lockutils [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.790 350391 DEBUG oslo_concurrency.lockutils [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.790 350391 DEBUG nova.compute.manager [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] No waiting events found dispatching network-vif-unplugged-867227e5-4422-4cfb-93d9-0589612717db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:03:03 compute-0 nova_compute[350387]: 2025-11-26 02:03:03.790 350391 DEBUG nova.compute.manager [req-0cf919f2-6231-40d0-8e32-7868d80b8c85 req-75fcf292-0ed6-4e61-a5d0-133553cd78a2 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-vif-unplugged-867227e5-4422-4cfb-93d9-0589612717db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:03:04 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 02:03:03.742 350391 DEBUG nova.virt.libvirt.vif [None req-47214334-a9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
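[note] rsyslog dropped the tail of the 8192-byte VIF debug message above against its 8096-byte limit, which is why the base64 user_data earlier is incomplete in this capture. If oversized oslo DEBUG lines need to survive intact, the limit can be raised in /etc/rsyslog.conf; the directive must appear before any module()/input() statements (a sketch, size chosen arbitrarily):

    # /etc/rsyslog.conf -- near the top, before inputs are loaded
    global(maxMessageSize="64k")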
Nov 26 02:03:04 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:04.206 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:03:04 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:04.207 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.211 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.438 350391 DEBUG nova.compute.manager [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-changed-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.438 350391 DEBUG nova.compute.manager [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Refreshing instance network info cache due to event network-changed-867227e5-4422-4cfb-93d9-0589612717db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.440 350391 DEBUG oslo_concurrency.lockutils [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.441 350391 DEBUG oslo_concurrency.lockutils [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.442 350391 DEBUG nova.network.neutron [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Refreshing network info cache for port 867227e5-4422-4cfb-93d9-0589612717db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.578765164 +0000 UTC m=+0.076731182 container create 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.551232722 +0000 UTC m=+0.049198820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:03:04 compute-0 systemd[1]: Started libpod-conmon-112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f.scope.
Nov 26 02:03:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.720996271 +0000 UTC m=+0.218962359 container init 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.739642294 +0000 UTC m=+0.237608342 container start 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.746929738 +0000 UTC m=+0.244895796 container attach 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:03:04 compute-0 peaceful_babbage[430891]: 167 167
Nov 26 02:03:04 compute-0 systemd[1]: libpod-112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f.scope: Deactivated successfully.
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.755816787 +0000 UTC m=+0.253782825 container died 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 02:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e9866f9b931d61b62624eee4bae9d3222212b5dd62c132dc6c72a92c350dd69-merged.mount: Deactivated successfully.
Nov 26 02:03:04 compute-0 podman[430876]: 2025-11-26 02:03:04.831041746 +0000 UTC m=+0.329007764 container remove 112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:03:04 compute-0 systemd[1]: libpod-conmon-112091f2b878c40746f0c8e33df91e10965dd6c5304e97dfe2ae332dfadf235f.scope: Deactivated successfully.
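[note] The create → init → start → attach → died → remove sequence for peaceful_babbage is a single one-shot cephadm helper container (its only output was "167 167", the ceph uid/gid). Reproducing that lifecycle by hand, as a sketch: the image digest is from the log, but the command is an assumption, since the log does not record what the container ran:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # --rm yields the same died-then-remove sequence seen above.
    result = subprocess.run(['podman', 'run', '--rm', image, 'ceph', '--version'],
                            capture_output=True, text=True, check=True)
    print(result.stdout)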
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.922 350391 INFO nova.virt.libvirt.driver [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Deleting instance files /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_del#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.924 350391 INFO nova.virt.libvirt.driver [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Deletion of /var/lib/nova/instances/a8b199f7-8cd5-45ea-bc7e-af8352a6afa2_del complete#033[00m
Nov 26 02:03:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.995 350391 INFO nova.compute.manager [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.995 350391 DEBUG oslo.service.loopingcall [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.996 350391 DEBUG nova.compute.manager [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:03:04 compute-0 nova_compute[350387]: 2025-11-26 02:03:04.996 350391 DEBUG nova.network.neutron [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:03:05 compute-0 podman[430913]: 2025-11-26 02:03:05.091877869 +0000 UTC m=+0.062374670 container create 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:03:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 184 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 341 B/s wr, 5 op/s
Nov 26 02:03:05 compute-0 podman[430913]: 2025-11-26 02:03:05.066545628 +0000 UTC m=+0.037042459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:03:05 compute-0 systemd[1]: Started libpod-conmon-70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5.scope.
Nov 26 02:03:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afed7986b822de78be5341690ce4c34b6ea9898f05f07429aca3cc468cc2357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afed7986b822de78be5341690ce4c34b6ea9898f05f07429aca3cc468cc2357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afed7986b822de78be5341690ce4c34b6ea9898f05f07429aca3cc468cc2357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afed7986b822de78be5341690ce4c34b6ea9898f05f07429aca3cc468cc2357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:03:05 compute-0 podman[430913]: 2025-11-26 02:03:05.268472369 +0000 UTC m=+0.238969250 container init 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:03:05 compute-0 podman[430913]: 2025-11-26 02:03:05.286516395 +0000 UTC m=+0.257013186 container start 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:03:05 compute-0 podman[430913]: 2025-11-26 02:03:05.291206737 +0000 UTC m=+0.261703608 container attach 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.768 350391 DEBUG nova.network.neutron [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updated VIF entry in instance network info cache for port 867227e5-4422-4cfb-93d9-0589612717db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.770 350391 DEBUG nova.network.neutron [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [{"id": "867227e5-4422-4cfb-93d9-0589612717db", "address": "fa:16:3e:d6:c0:70", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.36", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap867227e5-44", "ovs_interfaceid": "867227e5-4422-4cfb-93d9-0589612717db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.797 350391 DEBUG oslo_concurrency.lockutils [req-3b8b76ff-a57b-4f00-9a42-3a778c7e3751 req-ecc0a801-9efe-470d-b5ec-8050cc32f55d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.869 350391 DEBUG nova.compute.manager [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.870 350391 DEBUG oslo_concurrency.lockutils [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.870 350391 DEBUG oslo_concurrency.lockutils [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.871 350391 DEBUG oslo_concurrency.lockutils [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.871 350391 DEBUG nova.compute.manager [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] No waiting events found dispatching network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:03:05 compute-0 nova_compute[350387]: 2025-11-26 02:03:05.871 350391 WARNING nova.compute.manager [req-8a9c3da4-f970-4e67-8e58-4bcc94109463 req-7ca43255-aa68-47a5-b624-2d4142d62669 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Received unexpected event network-vif-plugged-867227e5-4422-4cfb-93d9-0589612717db for instance with vm_state active and task_state deleting.
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]: {
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_id": 0,
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "type": "bluestore"
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    },
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_id": 2,
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "type": "bluestore"
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    },
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_id": 1,
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:        "type": "bluestore"
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]:    }
Nov 26 02:03:06 compute-0 youthful_dijkstra[430929]: }
Nov 26 02:03:06 compute-0 systemd[1]: libpod-70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5.scope: Deactivated successfully.
Nov 26 02:03:06 compute-0 systemd[1]: libpod-70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5.scope: Consumed 1.235s CPU time.
Nov 26 02:03:06 compute-0 podman[430913]: 2025-11-26 02:03:06.532464454 +0000 UTC m=+1.502961285 container died 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4afed7986b822de78be5341690ce4c34b6ea9898f05f07429aca3cc468cc2357-merged.mount: Deactivated successfully.
Nov 26 02:03:06 compute-0 podman[430913]: 2025-11-26 02:03:06.619064641 +0000 UTC m=+1.589561452 container remove 70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:03:06 compute-0 systemd[1]: libpod-conmon-70d8c8d1372cbb98749bb6bd770f7bec7b4856a59dfadf3bd75824f182a41fc5.scope: Deactivated successfully.
Nov 26 02:03:06 compute-0 nova_compute[350387]: 2025-11-26 02:03:06.672 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:03:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:03:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:03:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:03:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 61307fd6-e63a-4caa-90fb-6aea03a09613 does not exist
Nov 26 02:03:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4e68e570-230f-4c1b-ad2d-9cf769f11367 does not exist
Nov 26 02:03:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:03:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:03:07 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:03:07 compute-0 nova_compute[350387]: 2025-11-26 02:03:07.861 350391 DEBUG nova.network.neutron [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:03:07 compute-0 nova_compute[350387]: 2025-11-26 02:03:07.884 350391 INFO nova.compute.manager [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Took 2.89 seconds to deallocate network for instance.
Nov 26 02:03:07 compute-0 nova_compute[350387]: 2025-11-26 02:03:07.923 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:07 compute-0 nova_compute[350387]: 2025-11-26 02:03:07.924 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.033 350391 DEBUG oslo_concurrency.processutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:03:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071766748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.516 350391 DEBUG oslo_concurrency.processutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.531 350391 DEBUG nova.compute.provider_tree [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.561 350391 DEBUG nova.scheduler.client.report [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.594 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.633 350391 INFO nova.scheduler.client.report [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Deleted allocations for instance a8b199f7-8cd5-45ea-bc7e-af8352a6afa2
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.748 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:08 compute-0 nova_compute[350387]: 2025-11-26 02:03:08.770 350391 DEBUG oslo_concurrency.lockutils [None req-47214334-a9c2-47df-8cb6-e8c666053b19 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "a8b199f7-8cd5-45ea-bc7e-af8352a6afa2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:03:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:10.209 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:03:11 compute-0 nova_compute[350387]: 2025-11-26 02:03:11.676 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:03:13 compute-0 nova_compute[350387]: 2025-11-26 02:03:13.750 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:14 compute-0 podman[431048]: 2025-11-26 02:03:14.851212557 +0000 UTC m=+0.113535644 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 02:03:14 compute-0 podman[431047]: 2025-11-26 02:03:14.863675276 +0000 UTC m=+0.125329114 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 26 02:03:14 compute-0 podman[431049]: 2025-11-26 02:03:14.880647782 +0000 UTC m=+0.135754327 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:03:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:03:16 compute-0 nova_compute[350387]: 2025-11-26 02:03:16.680 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Nov 26 02:03:18 compute-0 podman[431106]: 2025-11-26 02:03:18.585026732 +0000 UTC m=+0.132637400 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:03:18 compute-0 podman[431107]: 2025-11-26 02:03:18.683006258 +0000 UTC m=+0.227243901 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 26 02:03:18 compute-0 nova_compute[350387]: 2025-11-26 02:03:18.727 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764122583.7256036, a8b199f7-8cd5-45ea-bc7e-af8352a6afa2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:03:18 compute-0 nova_compute[350387]: 2025-11-26 02:03:18.729 350391 INFO nova.compute.manager [-] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] VM Stopped (Lifecycle Event)
Nov 26 02:03:18 compute-0 nova_compute[350387]: 2025-11-26 02:03:18.752 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:18 compute-0 nova_compute[350387]: 2025-11-26 02:03:18.769 350391 DEBUG nova.compute.manager [None req-ae86a8a1-60b3-4990-a070-93c590c276bc - - - - - -] [instance: a8b199f7-8cd5-45ea-bc7e-af8352a6afa2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:03:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:21 compute-0 nova_compute[350387]: 2025-11-26 02:03:21.685 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:22 compute-0 podman[431149]: 2025-11-26 02:03:22.532410015 +0000 UTC m=+0.099396748 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible)
Nov 26 02:03:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:23 compute-0 nova_compute[350387]: 2025-11-26 02:03:23.755 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:24.983 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:24.984 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:03:24.984 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.326 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.379 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.380 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.381 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.382 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.383 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:03:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3780801795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:03:25 compute-0 nova_compute[350387]: 2025-11-26 02:03:25.914 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.056 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.056 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.056 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.064 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.064 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.064 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:03:26 compute-0 podman[431192]: 2025-11-26 02:03:26.149512996 +0000 UTC m=+0.146800436 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.651 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.653 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3629MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.653 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.654 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.688 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.787 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.788 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.789 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.789 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:03:26 compute-0 systemd-logind[800]: New session 61 of user zuul.
Nov 26 02:03:26 compute-0 systemd[1]: Started Session 61 of User zuul.
Nov 26 02:03:26 compute-0 nova_compute[350387]: 2025-11-26 02:03:26.879 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:03:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108356196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:03:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:03:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108356196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:03:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:03:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168831615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:03:27 compute-0 nova_compute[350387]: 2025-11-26 02:03:27.365 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:27 compute-0 nova_compute[350387]: 2025-11-26 02:03:27.378 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:03:27 compute-0 nova_compute[350387]: 2025-11-26 02:03:27.399 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:03:27 compute-0 nova_compute[350387]: 2025-11-26 02:03:27.401 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:03:27 compute-0 nova_compute[350387]: 2025-11-26 02:03:27.402 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:27 compute-0 podman[431337]: 2025-11-26 02:03:27.538627828 +0000 UTC m=+0.094720556 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Nov 26 02:03:27 compute-0 podman[431338]: 2025-11-26 02:03:27.56151649 +0000 UTC m=+0.113354329 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:03:28 compute-0 python3[431453]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 02:03:28 compute-0 nova_compute[350387]: 2025-11-26 02:03:28.374 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:28 compute-0 nova_compute[350387]: 2025-11-26 02:03:28.375 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:28 compute-0 nova_compute[350387]: 2025-11-26 02:03:28.375 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:28 compute-0 nova_compute[350387]: 2025-11-26 02:03:28.759 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:29 compute-0 nova_compute[350387]: 2025-11-26 02:03:29.301 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:29 compute-0 podman[158021]: time="2025-11-26T02:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:03:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:03:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8639 "" "Go-http-client/1.1"
Nov 26 02:03:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.682 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.683 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.684 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:03:30 compute-0 nova_compute[350387]: 2025-11-26 02:03:30.684 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:03:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:31 compute-0 openstack_network_exporter[367323]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:03:31 compute-0 openstack_network_exporter[367323]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:03:31 compute-0 openstack_network_exporter[367323]: ERROR   02:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:03:31 compute-0 openstack_network_exporter[367323]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:03:31 compute-0 openstack_network_exporter[367323]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:03:31 compute-0 nova_compute[350387]: 2025-11-26 02:03:31.691 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:31 compute-0 nova_compute[350387]: 2025-11-26 02:03:31.818 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:03:31 compute-0 nova_compute[350387]: 2025-11-26 02:03:31.842 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:03:31 compute-0 nova_compute[350387]: 2025-11-26 02:03:31.842 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:03:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:33 compute-0 nova_compute[350387]: 2025-11-26 02:03:33.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:33 compute-0 nova_compute[350387]: 2025-11-26 02:03:33.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:33 compute-0 nova_compute[350387]: 2025-11-26 02:03:33.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:33 compute-0 nova_compute[350387]: 2025-11-26 02:03:33.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:03:33 compute-0 nova_compute[350387]: 2025-11-26 02:03:33.763 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:03:35 compute-0 nova_compute[350387]: 2025-11-26 02:03:35.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:03:36 compute-0 nova_compute[350387]: 2025-11-26 02:03:36.695 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 683 KiB/s wr, 6 op/s
Nov 26 02:03:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 26 02:03:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 26 02:03:37 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 26 02:03:38 compute-0 nova_compute[350387]: 2025-11-26 02:03:38.766 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Nov 26 02:03:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:40 compute-0 ovn_controller[89102]: 2025-11-26T02:03:40Z|00057|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:03:41
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:03:41 compute-0 nova_compute[350387]: 2025-11-26 02:03:41.698 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:03:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:03:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Nov 26 02:03:43 compute-0 nova_compute[350387]: 2025-11-26 02:03:43.770 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.284 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0046c72b-74cd-452f-a02f-902be795d40a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.285 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.307 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.417 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.418 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.431 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.431 350391 INFO nova.compute.claims [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Claim successful on node compute-0.ctlplane.example.com
Nov 26 02:03:45 compute-0 podman[431492]: 2025-11-26 02:03:45.558913034 +0000 UTC m=+0.096580098 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:03:45 compute-0 podman[431491]: 2025-11-26 02:03:45.563977356 +0000 UTC m=+0.109404788 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 02:03:45 compute-0 nova_compute[350387]: 2025-11-26 02:03:45.577 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:45 compute-0 podman[431490]: 2025-11-26 02:03:45.588233027 +0000 UTC m=+0.130846490 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:03:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:03:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627008626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.014 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.028 350391 DEBUG nova.compute.provider_tree [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.067 350391 DEBUG nova.scheduler.client.report [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.099 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.101 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.167 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.189 350391 INFO nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.240 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.341 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.343 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.344 350391 INFO nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Creating image(s)
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.385 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.526 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.574 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.583 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "8b2418705cce6052c0ebe8d6666be2547437287b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.585 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "8b2418705cce6052c0ebe8d6666be2547437287b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.702 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:46 compute-0 nova_compute[350387]: 2025-11-26 02:03:46.861 350391 DEBUG nova.virt.libvirt.imagebackend [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image locations are: [{'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/85cfb92f-8bb6-4b62-9458-cec3db6a90d0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/85cfb92f-8bb6-4b62-9458-cec3db6a90d0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 26 02:03:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 773 KiB/s wr, 10 op/s
Nov 26 02:03:47 compute-0 nova_compute[350387]: 2025-11-26 02:03:47.916 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.006 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.part --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.008 350391 DEBUG nova.virt.images [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] 85cfb92f-8bb6-4b62-9458-cec3db6a90d0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.010 350391 DEBUG nova.privsep.utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.012 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.part /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.276 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.part /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.converted" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.280 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.342 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b.converted --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.344 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "8b2418705cce6052c0ebe8d6666be2547437287b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.375 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.383 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b 0046c72b-74cd-452f-a02f-902be795d40a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.773 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:03:48 compute-0 nova_compute[350387]: 2025-11-26 02:03:48.865 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b 0046c72b-74cd-452f-a02f-902be795d40a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.041 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] resizing rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 02:03:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 688 KiB/s wr, 9 op/s
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.301 350391 DEBUG nova.objects.instance [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'migration_context' on Instance uuid 0046c72b-74cd-452f-a02f-902be795d40a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.373 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.417 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.428 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.515 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.516 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.517 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.518 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.558 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:03:49 compute-0 nova_compute[350387]: 2025-11-26 02:03:49.580 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:03:49 compute-0 podman[431778]: 2025-11-26 02:03:49.611982239 +0000 UTC m=+0.159306127 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Nov 26 02:03:49 compute-0 podman[431780]: 2025-11-26 02:03:49.647932267 +0000 UTC m=+0.187988271 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 26 02:03:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.062 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.293 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.295 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Ensure instance console log exists: /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.296 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.296 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.297 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.298 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T02:03:32Z,direct_url=<?>,disk_format='qcow2',id=85cfb92f-8bb6-4b62-9458-cec3db6a90d0,min_disk=0,min_ram=0,name='fvt_testing_image',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T02:03:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '85cfb92f-8bb6-4b62-9458-cec3db6a90d0'}], 'ephemerals': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'size': 1, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.307 350391 WARNING nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.315 350391 DEBUG nova.virt.libvirt.host [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.316 350391 DEBUG nova.virt.libvirt.host [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.322 350391 DEBUG nova.virt.libvirt.host [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.323 350391 DEBUG nova.virt.libvirt.host [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.324 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.325 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:03:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='eefcd8ff-0262-4ed2-8f9b-ba348f336c91',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T02:03:32Z,direct_url=<?>,disk_format='qcow2',id=85cfb92f-8bb6-4b62-9458-cec3db6a90d0,min_disk=0,min_ram=0,name='fvt_testing_image',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T02:03:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.326 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.327 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.327 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.328 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.328 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.329 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.329 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.330 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.330 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.330 350391 DEBUG nova.virt.hardware [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
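
Annotation: all flavor and image topology hints are unset (the 0:0:0 limits and preferences above), so Nova enumerates the factorizations of the vCPU count under the 65536 per-dimension ceiling; for 1 vCPU the only split is 1 socket x 1 core x 1 thread, which is exactly the <topology> emitted in the guest XML further down. A sketch of that enumeration:

    # Enumerate (sockets, cores, threads) splits of a vCPU count, mirroring
    # "Build topologies for 1 vcpu(s) ... Got 1 possible topologies".
    def possible_topologies(vcpus, limit=65536):
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, limit) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= limit:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]
    print(list(possible_topologies(4)))   # every socket/core/thread split of 4
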
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.335 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:03:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:03:50 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889191802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.850 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
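
Annotation: before it can define RBD-backed disks, Nova shells out to the ceph CLI to discover the monitor map; the ceph-mon audit lines above show the same command arriving as entity client.openstack. A sketch of that round trip using the logged --id and --conf values (the JSON handling is illustrative):

    import json
    import subprocess

    # "ceph mon dump --format=json" returns the monmap; Nova extracts the
    # monitor addresses to place in the <host> elements of each RBD disk.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))
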
Nov 26 02:03:50 compute-0 nova_compute[350387]: 2025-11-26 02:03:50.853 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 36 op/s
Nov 26 02:03:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:03:51 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187497821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013691715300840008 of space, bias 1.0, pg target 0.4107514590252002 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:03:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
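
Annotation: each pg_autoscaler pass computes, per pool, pg_target = (fraction of raw space used) x bias x (PG budget for the CRUSH root), rounds to a power of two, and only resizes when the result is far enough from the current pg_num. The logged figures are consistent with a root-wide budget of 300 PGs (for example mon_target_pg_per_osd=100 across 3 OSDs, an assumption): 0.0013691715 x 1.0 x 300 ≈ 0.411 for 'vms' and 5.0873e-07 x 4.0 x 300 ≈ 0.00061 for 'cephfs.cephfs.meta', matching the lines above. A worked sketch:

    # Reproduce the pg targets logged above. ROOT_PG_TARGET = target PGs per
    # OSD times OSDs in the root; 100 * 3 = 300 reproduces the logged ratios.
    ROOT_PG_TARGET = 300

    pools = {
        "vms":                (0.0013691715300840008, 1.0),
        "images":             (0.0005066271692062251, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(f"{name:20s} pg target {usage * bias * ROOT_PG_TARGET:.6g}")
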
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.294 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.354 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.367 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.704 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:03:51 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408957597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.836 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.839 350391 DEBUG nova.objects.instance [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0046c72b-74cd-452f-a02f-902be795d40a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.861 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <uuid>0046c72b-74cd-452f-a02f-902be795d40a</uuid>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <name>instance-00000005</name>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <memory>524288</memory>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:name>fvt_testing_server</nova:name>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:03:50</nova:creationTime>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:flavor name="fvt_testing_flavor">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:memory>512</nova:memory>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:user uuid="b130e7a8bed3424f9f5ff63b35cd2b28">admin</nova:user>
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <nova:project uuid="4d902f6105ab4c81a51a4751fa89a83e">admin</nova:project>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="85cfb92f-8bb6-4b62-9458-cec3db6a90d0"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <nova:ports/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <system>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="serial">0046c72b-74cd-452f-a02f-902be795d40a</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="uuid">0046c72b-74cd-452f-a02f-902be795d40a</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </system>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <os>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </os>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <features>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </features>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0046c72b-74cd-452f-a02f-902be795d40a_disk">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </source>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0046c72b-74cd-452f-a02f-902be795d40a_disk.eph0">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </source>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <target dev="vdb" bus="virtio"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/0046c72b-74cd-452f-a02f-902be795d40a_disk.config">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </source>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:03:51 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/console.log" append="off"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <video>
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </video>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:03:51 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:03:51 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:03:51 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:03:51 compute-0 nova_compute[350387]: </domain>
Nov 26 02:03:51 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
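
Annotation: the finished domain is a q35 machine with the host-model CPU and the 1:1:1 topology chosen earlier, three network disks served over RBD from the vms pool (root, ephemeral, config drive) authenticating as the openstack cephx user, and no network interfaces, consistent with network_info=[]. A sketch of pulling the disk list back out of such XML with the standard library (xml_text stands in for the dump above):

    import xml.etree.ElementTree as ET

    xml_text = "<domain type='kvm'>...</domain>"  # placeholder for the logged dump

    # Yield (target dev, rbd image) for every disk with an RBD network source.
    def rbd_disks(xml_text):
        root = ET.fromstring(xml_text)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            tgt = disk.find("target")
            if src is not None and src.get("protocol") == "rbd":
                yield tgt.get("dev"), src.get("name")

    for dev, image in rbd_disks(xml_text):
        print(dev, "<-", image)
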
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.911 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.912 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.912 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.912 350391 INFO nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Using config drive#033[00m
Nov 26 02:03:51 compute-0 nova_compute[350387]: 2025-11-26 02:03:51.958 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:03:52 compute-0 nova_compute[350387]: 2025-11-26 02:03:52.679 350391 INFO nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Creating config drive at /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config#033[00m
Nov 26 02:03:52 compute-0 nova_compute[350387]: 2025-11-26 02:03:52.688 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bd09d9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:03:52 compute-0 nova_compute[350387]: 2025-11-26 02:03:52.833 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4bd09d9" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:03:52 compute-0 nova_compute[350387]: 2025-11-26 02:03:52.904 350391 DEBUG nova.storage.rbd_utils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] rbd image 0046c72b-74cd-452f-a02f-902be795d40a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:03:52 compute-0 nova_compute[350387]: 2025-11-26 02:03:52.917 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config 0046c72b-74cd-452f-a02f-902be795d40a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:03:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 52 op/s
Nov 26 02:03:53 compute-0 nova_compute[350387]: 2025-11-26 02:03:53.199 350391 DEBUG oslo_concurrency.processutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config 0046c72b-74cd-452f-a02f-902be795d40a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:03:53 compute-0 nova_compute[350387]: 2025-11-26 02:03:53.200 350391 INFO nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Deleting local config drive /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a/disk.config because it was imported into RBD.#033[00m
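
Annotation: with RBD-backed instance storage the config drive is built locally as an ISO 9660/Joliet image labelled config-2, imported into the vms pool as <uuid>_disk.config, and the local copy deleted, which is exactly the mkisofs and rbd import commands logged above; the guest sees it through the SATA cdrom defined in the domain XML. A sketch of the two steps with the logged paths (the metadata directory is a hypothetical stand-in for the temporary tmpz4bd09d9 directory):

    import subprocess

    inst = "0046c72b-74cd-452f-a02f-902be795d40a"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the config-2 ISO from a directory of metadata files...
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/metadata_dir",   # hypothetical stand-in for the temp dir
    ])
    # ...then push it into RBD so the local copy can be removed.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
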
Nov 26 02:03:53 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 02:03:53 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 02:03:53 compute-0 systemd-machined[138512]: New machine qemu-5-instance-00000005.
Nov 26 02:03:53 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 26 02:03:53 compute-0 podman[432059]: 2025-11-26 02:03:53.389727605 +0000 UTC m=+0.135404707 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.expose-services=)
Nov 26 02:03:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:03:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7289 writes, 32K keys, 7289 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7289 writes, 7289 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1323 writes, 5989 keys, 1323 commit groups, 1.0 writes per commit group, ingest: 8.57 MB, 0.01 MB/s#012Interval WAL: 1323 writes, 1323 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    114.5      0.34              0.19        19    0.018       0      0       0.0       0.0#012  L6      1/0    8.44 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    171.0    138.2      0.94              0.57        18    0.052     86K    10K       0.0       0.0#012 Sum      1/0    8.44 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3    125.2    131.8      1.28              0.76        37    0.035     86K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    127.2    131.9      0.30              0.19         8    0.038     22K   2517       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    171.0    138.2      0.94              0.57        18    0.052     86K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    116.8      0.33              0.19        18    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.3 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 20.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000294 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1280,19.42 MB,6.30425%) FilterBlock(38,244.73 KB,0.0775969%) IndexBlock(38,447.89 KB,0.142011%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 02:03:53 compute-0 nova_compute[350387]: 2025-11-26 02:03:53.776 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.312 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122634.3106956, 0046c72b-74cd-452f-a02f-902be795d40a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.314 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.318 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.319 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.328 350391 INFO nova.virt.libvirt.driver [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Instance spawned successfully.#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.329 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.337 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.346 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.363 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.363 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.364 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.365 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.366 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.367 350391 DEBUG nova.virt.libvirt.driver [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.373 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.373 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764122634.3175123, 0046c72b-74cd-452f-a02f-902be795d40a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.374 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] VM Started (Lifecycle Event)#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.403 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.412 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
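
Annotation: each libvirt lifecycle event (Resumed, then Started) triggers a power-state sync, but while the instance still carries a pending task (spawning) the sync is skipped so it cannot race with the build in flight, hence the "Skip" lines around here. A sketch of that guard (field names illustrative):

    # Mirror "During sync_power_state the instance has a pending task
    # (spawning). Skip." -- never touch state while a task is pending.
    def maybe_sync_power_state(instance, vm_power_state):
        if instance["task_state"] is not None:
            return f"skip: pending task {instance['task_state']}"
        if instance["power_state"] != vm_power_state:
            instance["power_state"] = vm_power_state
            return "synced"
        return "already in sync"

    print(maybe_sync_power_state(
        {"task_state": "spawning", "power_state": 0}, vm_power_state=1))
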
Nov 26 02:03:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.438 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.443 350391 INFO nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Took 8.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.444 350391 DEBUG nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:03:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.513 350391 INFO nova.compute.manager [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Took 9.13 seconds to build instance.#033[00m
Nov 26 02:03:54 compute-0 nova_compute[350387]: 2025-11-26 02:03:54.532 350391 DEBUG oslo_concurrency.lockutils [None req-745410bb-7283-49b4-8820-f36c79594e96 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
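
Annotation: the timings line up: 8.10 s to spawn on the hypervisor, 9.13 s for the whole build, and the per-instance lock held for 9.247 s, consistent with the lock wrapping the entire _locked_do_build_and_run_instance call. A sketch of recovering such a figure from two oslo timestamps (the start value is back-computed from the logged 9.247 s hold, not taken from this excerpt):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    start = datetime.strptime("2025-11-26 02:03:45.285", fmt)  # hypothetical acquire time
    end = datetime.strptime("2025-11-26 02:03:54.532", fmt)    # "released" line above
    print(f"lock held {(end - start).total_seconds():.3f}s")   # 9.247s
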
Nov 26 02:03:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:03:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 49 op/s
Nov 26 02:03:56 compute-0 podman[432190]: 2025-11-26 02:03:56.621983619 +0000 UTC m=+0.159038370 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 02:03:56 compute-0 nova_compute[350387]: 2025-11-26 02:03:56.706 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 26 02:03:58 compute-0 podman[432208]: 2025-11-26 02:03:58.581694488 +0000 UTC m=+0.129013628 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:03:58 compute-0 podman[432207]: 2025-11-26 02:03:58.607686767 +0000 UTC m=+0.148742981 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Nov 26 02:03:58 compute-0 nova_compute[350387]: 2025-11-26 02:03:58.781 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:03:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 26 02:03:59 compute-0 podman[158021]: time="2025-11-26T02:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:03:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:03:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Nov 26 02:03:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 104 op/s
Nov 26 02:04:01 compute-0 openstack_network_exporter[367323]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:04:01 compute-0 openstack_network_exporter[367323]: ERROR   02:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:04:01 compute-0 openstack_network_exporter[367323]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:04:01 compute-0 openstack_network_exporter[367323]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:04:01 compute-0 openstack_network_exporter[367323]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
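
Annotation: these exporter errors are expected on this node: no ovn-northd runs here, and the dpif-netdev/pmd-*-show queries only apply to a userspace (DPDK) datapath, so the collector finds neither control sockets nor a matching datapath. A sketch of the control-socket lookup that fails, assuming the conventional OVS run directory (OVN daemons typically use /run/ovn instead):

    import glob

    # ovs-appctl-style tools find a daemon through its <name>.<pid>.ctl
    # socket; an empty glob is reported as "no control socket files found".
    def control_sockets(daemon, rundir="/var/run/openvswitch"):
        return glob.glob(f"{rundir}/{daemon}.*.ctl")

    for daemon in ("ovn-northd", "ovsdb-server"):
        socks = control_sockets(daemon)
        print(daemon, "->", socks or "no control socket files found")
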
Nov 26 02:04:01 compute-0 nova_compute[350387]: 2025-11-26 02:04:01.708 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:04:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 76 op/s
Nov 26 02:04:03 compute-0 nova_compute[350387]: 2025-11-26 02:04:03.783 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 26 02:04:06 compute-0 nova_compute[350387]: 2025-11-26 02:04:06.713 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 341 B/s wr, 55 op/s
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1423e7df-60c0-410c-9e4e-aa73c2a0cce3 does not exist
Nov 26 02:04:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 99ed4c4d-af24-4fe7-82d4-bae74706c037 does not exist
Nov 26 02:04:08 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 74241dcb-7c31-4d63-b33f-60e4d904bfce does not exist
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:04:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:04:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:08 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:04:08 compute-0 nova_compute[350387]: 2025-11-26 02:04:08.787 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.148 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0046c72b-74cd-452f-a02f-902be795d40a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.150 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.150 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "0046c72b-74cd-452f-a02f-902be795d40a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.151 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.152 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.153 350391 INFO nova.compute.manager [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Terminating instance
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.155 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "refresh_cache-0046c72b-74cd-452f-a02f-902be795d40a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.155 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquired lock "refresh_cache-0046c72b-74cd-452f-a02f-902be795d40a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.156 350391 DEBUG nova.network.neutron [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:04:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 44 op/s
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.328 350391 DEBUG nova.network.neutron [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.461028672 +0000 UTC m=+0.074834369 container create 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 02:04:09 compute-0 systemd[1]: Started libpod-conmon-0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8.scope.
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.432522713 +0000 UTC m=+0.046328420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.5793981 +0000 UTC m=+0.193203817 container init 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.5940294 +0000 UTC m=+0.207835107 container start 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.599192655 +0000 UTC m=+0.212998422 container attach 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:04:09 compute-0 fervent_bartik[432534]: 167 167
Nov 26 02:04:09 compute-0 systemd[1]: libpod-0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8.scope: Deactivated successfully.
Nov 26 02:04:09 compute-0 conmon[432534]: conmon 0def599dd1549a2da0c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8.scope/container/memory.events
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.602637421 +0000 UTC m=+0.216443158 container died 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 02:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7cafd0ed2c6712e3c439022eedcd637af35bbe2b4fffbc010b8eda23e4d2d4-merged.mount: Deactivated successfully.
Nov 26 02:04:09 compute-0 podman[432518]: 2025-11-26 02:04:09.675985098 +0000 UTC m=+0.289790805 container remove 0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:04:09 compute-0 systemd[1]: libpod-conmon-0def599dd1549a2da0c3e20d5286646e361ce4b4aa40c59f1f340f1b5379b5f8.scope: Deactivated successfully.
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.777 350391 DEBUG nova.network.neutron [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.811 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Releasing lock "refresh_cache-0046c72b-74cd-452f-a02f-902be795d40a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:04:09 compute-0 nova_compute[350387]: 2025-11-26 02:04:09.814 350391 DEBUG nova.compute.manager [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 02:04:09 compute-0 podman[432558]: 2025-11-26 02:04:09.921210892 +0000 UTC m=+0.073709767 container create ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:04:09 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 26 02:04:09 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.944s CPU time.
Nov 26 02:04:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:09 compute-0 systemd-machined[138512]: Machine qemu-5-instance-00000005 terminated.
Nov 26 02:04:09 compute-0 systemd[1]: Started libpod-conmon-ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898.scope.
Nov 26 02:04:09 compute-0 podman[432558]: 2025-11-26 02:04:09.889858033 +0000 UTC m=+0.042356908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:10 compute-0 podman[432558]: 2025-11-26 02:04:10.036444793 +0000 UTC m=+0.188943688 container init ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:04:10 compute-0 podman[432558]: 2025-11-26 02:04:10.056023432 +0000 UTC m=+0.208522297 container start ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:04:10 compute-0 podman[432558]: 2025-11-26 02:04:10.061217807 +0000 UTC m=+0.213716712 container attach ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:04:10 compute-0 nova_compute[350387]: 2025-11-26 02:04:10.072 350391 INFO nova.virt.libvirt.driver [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Instance destroyed successfully.
Nov 26 02:04:10 compute-0 nova_compute[350387]: 2025-11-26 02:04:10.073 350391 DEBUG nova.objects.instance [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'resources' on Instance uuid 0046c72b-74cd-452f-a02f-902be795d40a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 172 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 KiB/s wr, 59 op/s
Nov 26 02:04:11 compute-0 objective_solomon[432574]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:04:11 compute-0 objective_solomon[432574]: --> relative data size: 1.0
Nov 26 02:04:11 compute-0 objective_solomon[432574]: --> All data devices are unavailable
Nov 26 02:04:11 compute-0 systemd[1]: libpod-ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898.scope: Deactivated successfully.
Nov 26 02:04:11 compute-0 systemd[1]: libpod-ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898.scope: Consumed 1.144s CPU time.
Nov 26 02:04:11 compute-0 podman[432625]: 2025-11-26 02:04:11.381918781 +0000 UTC m=+0.047119772 container died ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.385 350391 INFO nova.virt.libvirt.driver [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Deleting instance files /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a_del
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.387 350391 INFO nova.virt.libvirt.driver [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Deletion of /var/lib/nova/instances/0046c72b-74cd-452f-a02f-902be795d40a_del complete
Nov 26 02:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ae7fafe6c16ef02eb742543e3cbdf0f445dab8acffd0ac291be943365609eb-merged.mount: Deactivated successfully.
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.442 350391 INFO nova.compute.manager [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Took 1.63 seconds to destroy the instance on the hypervisor.
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.443 350391 DEBUG oslo.service.loopingcall [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.443 350391 DEBUG nova.compute.manager [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.444 350391 DEBUG nova.network.neutron [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 26 02:04:11 compute-0 podman[432625]: 2025-11-26 02:04:11.47605399 +0000 UTC m=+0.141254951 container remove ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:04:11 compute-0 systemd[1]: libpod-conmon-ccb59d8dc4def59df814976d3feb1985f9a7ef1457e1ee343b6719fd16573898.scope: Deactivated successfully.
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.673 350391 DEBUG nova.network.neutron [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.688 350391 DEBUG nova.network.neutron [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.712 350391 INFO nova.compute.manager [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Took 0.27 seconds to deallocate network for instance.
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.720 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.758 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.758 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:11 compute-0 nova_compute[350387]: 2025-11-26 02:04:11.893 350391 DEBUG oslo_concurrency.processutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:04:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:04:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490795887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.379 350391 DEBUG oslo_concurrency.processutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.397 350391 DEBUG nova.compute.provider_tree [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.419 350391 DEBUG nova.scheduler.client.report [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.445 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.501 350391 INFO nova.scheduler.client.report [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Deleted allocations for instance 0046c72b-74cd-452f-a02f-902be795d40a
Nov 26 02:04:12 compute-0 nova_compute[350387]: 2025-11-26 02:04:12.597 350391 DEBUG oslo_concurrency.lockutils [None req-4d9ea624-3419-44eb-b8ff-a9ec89413e81 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "0046c72b-74cd-452f-a02f-902be795d40a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.611222594 +0000 UTC m=+0.077174184 container create 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:04:12 compute-0 systemd[1]: Started libpod-conmon-1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df.scope.
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.584957918 +0000 UTC m=+0.050909488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.778319189 +0000 UTC m=+0.244270809 container init 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.795438048 +0000 UTC m=+0.261389628 container start 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.80193002 +0000 UTC m=+0.267881610 container attach 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:04:12 compute-0 hardcore_bose[432812]: 167 167
Nov 26 02:04:12 compute-0 systemd[1]: libpod-1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df.scope: Deactivated successfully.
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.806140748 +0000 UTC m=+0.272092308 container died 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 02:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fbfd6240850f2705e700444d8b23f360a69044f365b740902a0e0e50306b74e-merged.mount: Deactivated successfully.
Nov 26 02:04:12 compute-0 podman[432796]: 2025-11-26 02:04:12.869479434 +0000 UTC m=+0.335431004 container remove 1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:04:12 compute-0 systemd[1]: libpod-conmon-1ac7d22ba68ddeceb67f2281cdbc29ad71fb7e046a6c819d29784afe381e70df.scope: Deactivated successfully.
Nov 26 02:04:13 compute-0 podman[432835]: 2025-11-26 02:04:13.161592703 +0000 UTC m=+0.095933880 container create 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 02:04:13 compute-0 podman[432835]: 2025-11-26 02:04:13.116199811 +0000 UTC m=+0.050541038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 172 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 26 02:04:13 compute-0 systemd[1]: Started libpod-conmon-95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9.scope.
Nov 26 02:04:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09700b9e6b127c0a9f3583837f3afb5c1ec8b8cd1e1df6461ba93b7a6b767773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09700b9e6b127c0a9f3583837f3afb5c1ec8b8cd1e1df6461ba93b7a6b767773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09700b9e6b127c0a9f3583837f3afb5c1ec8b8cd1e1df6461ba93b7a6b767773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09700b9e6b127c0a9f3583837f3afb5c1ec8b8cd1e1df6461ba93b7a6b767773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:13 compute-0 podman[432835]: 2025-11-26 02:04:13.324969113 +0000 UTC m=+0.259310280 container init 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:04:13 compute-0 podman[432835]: 2025-11-26 02:04:13.360561581 +0000 UTC m=+0.294902758 container start 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 02:04:13 compute-0 podman[432835]: 2025-11-26 02:04:13.366549859 +0000 UTC m=+0.300891046 container attach 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:04:13 compute-0 nova_compute[350387]: 2025-11-26 02:04:13.789 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]: {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    "0": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "devices": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "/dev/loop3"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            ],
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_name": "ceph_lv0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_size": "21470642176",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "name": "ceph_lv0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "tags": {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_name": "ceph",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.crush_device_class": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.encrypted": "0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_id": "0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.vdo": "0"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            },
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "vg_name": "ceph_vg0"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        }
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    ],
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    "1": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "devices": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "/dev/loop4"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            ],
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_name": "ceph_lv1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_size": "21470642176",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "name": "ceph_lv1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "tags": {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_name": "ceph",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.crush_device_class": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.encrypted": "0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_id": "1",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.vdo": "0"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            },
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "vg_name": "ceph_vg1"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        }
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    ],
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    "2": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "devices": [
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "/dev/loop5"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            ],
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_name": "ceph_lv2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_size": "21470642176",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "name": "ceph_lv2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "tags": {
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.cluster_name": "ceph",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.crush_device_class": "",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.encrypted": "0",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osd_id": "2",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:                "ceph.vdo": "0"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            },
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "type": "block",
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:            "vg_name": "ceph_vg2"
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:        }
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]:    ]
Nov 26 02:04:14 compute-0 trusting_engelbart[432851]: }
Nov 26 02:04:14 compute-0 systemd[1]: libpod-95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9.scope: Deactivated successfully.
Nov 26 02:04:14 compute-0 podman[432835]: 2025-11-26 02:04:14.203966715 +0000 UTC m=+1.138307862 container died 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-09700b9e6b127c0a9f3583837f3afb5c1ec8b8cd1e1df6461ba93b7a6b767773-merged.mount: Deactivated successfully.
Nov 26 02:04:14 compute-0 podman[432835]: 2025-11-26 02:04:14.282044664 +0000 UTC m=+1.216385821 container remove 95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 02:04:14 compute-0 systemd[1]: libpod-conmon-95947f2e619b8976df9028e1c66bb94bfd2f77906142678c83612f976fbe9ed9.scope: Deactivated successfully.
Nov 26 02:04:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 26 02:04:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 26 02:04:14 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 26 02:04:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 157 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.432269339 +0000 UTC m=+0.091906197 container create 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.394173261 +0000 UTC m=+0.053810119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:15 compute-0 systemd[1]: Started libpod-conmon-7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e.scope.
Nov 26 02:04:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.566686858 +0000 UTC m=+0.226323726 container init 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.580286499 +0000 UTC m=+0.239923347 container start 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.586892154 +0000 UTC m=+0.246529002 container attach 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:04:15 compute-0 modest_noyce[433022]: 167 167
Nov 26 02:04:15 compute-0 systemd[1]: libpod-7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e.scope: Deactivated successfully.
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.592000807 +0000 UTC m=+0.251637665 container died 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c1983549513d79626bf09f32eeefc9bb15eae9b08a272f85828d2ceb27b586-merged.mount: Deactivated successfully.
Nov 26 02:04:15 compute-0 podman[433008]: 2025-11-26 02:04:15.662676409 +0000 UTC m=+0.322313237 container remove 7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:04:15 compute-0 systemd[1]: libpod-conmon-7c8d1200390ab94c9b5f82823aebb59ef0a22c12f19196550c46b724b60d165e.scope: Deactivated successfully.
Nov 26 02:04:15 compute-0 podman[433029]: 2025-11-26 02:04:15.746259072 +0000 UTC m=+0.108837002 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 02:04:15 compute-0 podman[433035]: 2025-11-26 02:04:15.747074065 +0000 UTC m=+0.109933273 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:04:15 compute-0 podman[433037]: 2025-11-26 02:04:15.761561581 +0000 UTC m=+0.117138275 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm)
Nov 26 02:04:15 compute-0 podman[433100]: 2025-11-26 02:04:15.887690877 +0000 UTC m=+0.056059673 container create 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 02:04:15 compute-0 systemd[1]: Started libpod-conmon-375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c.scope.
Nov 26 02:04:15 compute-0 podman[433100]: 2025-11-26 02:04:15.865455793 +0000 UTC m=+0.033824599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:04:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fe248c4b400fcf7c8ab23aaaa83c9f863b164192b90ce21af5bfaeb8efc35d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fe248c4b400fcf7c8ab23aaaa83c9f863b164192b90ce21af5bfaeb8efc35d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fe248c4b400fcf7c8ab23aaaa83c9f863b164192b90ce21af5bfaeb8efc35d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9fe248c4b400fcf7c8ab23aaaa83c9f863b164192b90ce21af5bfaeb8efc35d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:04:16 compute-0 podman[433100]: 2025-11-26 02:04:16.070947124 +0000 UTC m=+0.239315980 container init 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:04:16 compute-0 podman[433100]: 2025-11-26 02:04:16.089880865 +0000 UTC m=+0.258249681 container start 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 02:04:16 compute-0 podman[433100]: 2025-11-26 02:04:16.096965934 +0000 UTC m=+0.265334800 container attach 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:04:16 compute-0 nova_compute[350387]: 2025-11-26 02:04:16.720 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 139 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Nov 26 02:04:17 compute-0 objective_lalande[433116]: {
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_id": 0,
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "type": "bluestore"
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    },
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_id": 2,
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "type": "bluestore"
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    },
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_id": 1,
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:04:17 compute-0 objective_lalande[433116]:        "type": "bluestore"
Nov 26 02:04:17 compute-0 objective_lalande[433116]:    }
Nov 26 02:04:17 compute-0 objective_lalande[433116]: }
Nov 26 02:04:17 compute-0 systemd[1]: libpod-375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c.scope: Deactivated successfully.
Nov 26 02:04:17 compute-0 podman[433100]: 2025-11-26 02:04:17.303491767 +0000 UTC m=+1.471860573 container died 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:04:17 compute-0 systemd[1]: libpod-375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c.scope: Consumed 1.203s CPU time.
Nov 26 02:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9fe248c4b400fcf7c8ab23aaaa83c9f863b164192b90ce21af5bfaeb8efc35d-merged.mount: Deactivated successfully.
Nov 26 02:04:17 compute-0 podman[433100]: 2025-11-26 02:04:17.419958232 +0000 UTC m=+1.588327048 container remove 375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:04:17 compute-0 systemd[1]: libpod-conmon-375fe6313392cd54aa1c57c5e0c20299cb00f4914bf3dfec0d14aa35533a977c.scope: Deactivated successfully.
Nov 26 02:04:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:04:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:04:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:17 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a1f74725-61b7-47f6-8735-29c66838a923 does not exist
Nov 26 02:04:17 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ffa37beb-9d75-47c5-a3e2-8220022471a2 does not exist
Nov 26 02:04:18 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:18 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:04:18 compute-0 nova_compute[350387]: 2025-11-26 02:04:18.792 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 139 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Nov 26 02:04:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.966025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122659966067, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1034, "num_deletes": 255, "total_data_size": 1472661, "memory_usage": 1504320, "flush_reason": "Manual Compaction"}
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122659982907, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1447908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32158, "largest_seqno": 33191, "table_properties": {"data_size": 1442812, "index_size": 2620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10920, "raw_average_key_size": 19, "raw_value_size": 1432478, "raw_average_value_size": 2548, "num_data_blocks": 117, "num_entries": 562, "num_filter_entries": 562, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122567, "oldest_key_time": 1764122567, "file_creation_time": 1764122659, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 16953 microseconds, and 7424 cpu microseconds.
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.982980) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1447908 bytes OK
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.983001) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.985729) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.985755) EVENT_LOG_v1 {"time_micros": 1764122659985746, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.985777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1467760, prev total WAL file size 1467760, number of live WAL files 2.
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.987552) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1413KB)], [71(8640KB)]
Nov 26 02:04:19 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122659987662, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10295628, "oldest_snapshot_seqno": -1}
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5368 keys, 10193411 bytes, temperature: kUnknown
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122660060974, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 10193411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10154925, "index_size": 23976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135322, "raw_average_key_size": 25, "raw_value_size": 10055283, "raw_average_value_size": 1873, "num_data_blocks": 991, "num_entries": 5368, "num_filter_entries": 5368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122659, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.061344) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 10193411 bytes
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.064146) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.2 rd, 138.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(14.2) write-amplify(7.0) OK, records in: 5894, records dropped: 526 output_compression: NoCompression
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.064175) EVENT_LOG_v1 {"time_micros": 1764122660064162, "job": 40, "event": "compaction_finished", "compaction_time_micros": 73431, "compaction_time_cpu_micros": 46643, "output_level": 6, "num_output_files": 1, "total_output_size": 10193411, "num_input_records": 5894, "num_output_records": 5368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122660065050, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122660068764, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:19.987218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.069622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.069632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.069636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.069640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:04:20.069645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:04:20 compute-0 podman[433211]: 2025-11-26 02:04:20.611891038 +0000 UTC m=+0.155175891 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Nov 26 02:04:20 compute-0 podman[433212]: 2025-11-26 02:04:20.660573113 +0000 UTC m=+0.203141196 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:04:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Nov 26 02:04:21 compute-0 nova_compute[350387]: 2025-11-26 02:04:21.726 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 26 02:04:23 compute-0 podman[433253]: 2025-11-26 02:04:23.613586371 +0000 UTC m=+0.163543276 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 02:04:23 compute-0 nova_compute[350387]: 2025-11-26 02:04:23.796 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 26 02:04:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 26 02:04:24 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 26 02:04:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:04:24.985 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:04:24.986 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:04:24.986 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:04:25 compute-0 nova_compute[350387]: 2025-11-26 02:04:25.048 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764122650.0412028, 0046c72b-74cd-452f-a02f-902be795d40a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:04:25 compute-0 nova_compute[350387]: 2025-11-26 02:04:25.049 350391 INFO nova.compute.manager [-] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] VM Stopped (Lifecycle Event)
Nov 26 02:04:25 compute-0 nova_compute[350387]: 2025-11-26 02:04:25.088 350391 DEBUG nova.compute.manager [None req-115a7bc7-8ff2-4f4e-8078-588271df11f1 - - - - - -] [instance: 0046c72b-74cd-452f-a02f-902be795d40a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:04:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 30 op/s
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.328 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.329 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.330 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.331 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.332 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.728 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:04:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672214681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.834 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.993 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.994 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:26 compute-0 nova_compute[350387]: 2025-11-26 02:04:26.995 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.008 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.008 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.008 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:04:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:04:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1630796839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:04:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:04:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1630796839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:04:27 compute-0 podman[433298]: 2025-11-26 02:04:27.058430205 +0000 UTC m=+0.137503176 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:04:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Nov 26 02:04:27 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Nov 26 02:04:27 compute-0 systemd[1]: session-61.scope: Consumed 1.400s CPU time.
Nov 26 02:04:27 compute-0 systemd-logind[800]: Session 61 logged out. Waiting for processes to exit.
Nov 26 02:04:27 compute-0 systemd-logind[800]: Removed session 61.
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.684 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.686 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.686 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.687 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.797 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.797 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.798 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.798 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:04:27 compute-0 nova_compute[350387]: 2025-11-26 02:04:27.901 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:04:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:04:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650447860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.454 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
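Annotation: nova sized the Ceph backend by shelling out to the exact command shown in the lines above, and the ceph-mon audit line confirms the dispatch. The same call is easy to reproduce; a sketch using the logged arguments (client id openstack and the conf path come straight from the log; the stats field names are as in current Ceph JSON output):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])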
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.469 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.493 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
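Annotation: the inventory record above is enough to reproduce what placement will admit. Per resource class, usable capacity is (total - reserved) * allocation_ratio; checking that against the logged values:

    inventory = {  # copied from the report.py line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, "capacity:", cap)
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52 -- ample headroom for the two
    # instances holding {VCPU: 1, MEMORY_MB: 512, DISK_GB: 2} each (see above).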
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.525 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.526 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
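Annotation: the acquire/release pair bracketing the update (waited 0.001s, held 0.839s) is oslo.concurrency's named-lock pattern, and the same primitive is available directly. A minimal sketch using the lock name from the log (the body is a placeholder, not nova's code):

    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        pass  # resource accounting happens under the lock here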
Nov 26 02:04:28 compute-0 nova_compute[350387]: 2025-11-26 02:04:28.801 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Nov 26 02:04:29 compute-0 nova_compute[350387]: 2025-11-26 02:04:29.527 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:29 compute-0 nova_compute[350387]: 2025-11-26 02:04:29.527 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:29 compute-0 nova_compute[350387]: 2025-11-26 02:04:29.528 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:29 compute-0 podman[433341]: 2025-11-26 02:04:29.587181236 +0000 UTC m=+0.130401597 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 02:04:29 compute-0 podman[433342]: 2025-11-26 02:04:29.619342667 +0000 UTC m=+0.154647566 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
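Annotation: the node_exporter config above restricts the systemd collector to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. Assuming node_exporter anchors include patterns against the whole unit name, the filter can be sanity-checked with a full match over sample unit names:

    import re

    # Pattern copied from --collector.systemd.unit-include above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # Only sshd.service falls outside the include list.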
Nov 26 02:04:29 compute-0 podman[158021]: time="2025-11-26T02:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:04:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:04:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Nov 26 02:04:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 26 02:04:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 26 02:04:31 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 26 02:04:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 639 B/s wr, 7 op/s
Nov 26 02:04:31 compute-0 nova_compute[350387]: 2025-11-26 02:04:31.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:31 compute-0 openstack_network_exporter[367323]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:04:31 compute-0 openstack_network_exporter[367323]: ERROR   02:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:04:31 compute-0 openstack_network_exporter[367323]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:04:31 compute-0 openstack_network_exporter[367323]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
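Annotation: the exporter errors above mean openstack_network_exporter went looking for ovs-appctl control sockets for ovn-northd and ovsdb-server and found none; on a compute node that runs ovn-controller rather than the northd/ovsdb pieces this is likely expected noise, and the dpif-netdev calls fail similarly because no userspace (netdev) datapath is configured. Which control sockets actually exist is quick to check; a sketch over the conventional OVS/OVN runtime directories:

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")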
Nov 26 02:04:31 compute-0 nova_compute[350387]: 2025-11-26 02:04:31.731 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.821 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.822 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:04:32 compute-0 nova_compute[350387]: 2025-11-26 02:04:32.822 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:04:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 1.9 MiB/s wr, 12 op/s
Nov 26 02:04:33 compute-0 nova_compute[350387]: 2025-11-26 02:04:33.804 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.207 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.230 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.230 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
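Annotation: the info-cache update above carries the instance's full VIF list as JSON. A small sketch for pulling the fixed and floating addresses out of that structure, with keys exactly as they appear in the logged payload:

    def addresses(network_info):
        # network_info: the list logged by update_instance_cache_with_nw_info.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"] for f in ip["floating_ips"]]

    # For the payload above this yields ("192.168.0.232", ["192.168.122.234"]).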
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:04:34 compute-0 nova_compute[350387]: 2025-11-26 02:04:34.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:04:34 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Nov 26 02:04:36 compute-0 nova_compute[350387]: 2025-11-26 02:04:36.733 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Nov 26 02:04:38 compute-0 nova_compute[350387]: 2025-11-26 02:04:38.806 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Nov 26 02:04:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 26 02:04:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 26 02:04:39 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 26 02:04:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:04:41
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control']
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 31 op/s
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:04:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:04:41 compute-0 nova_compute[350387]: 2025-11-26 02:04:41.737 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.870 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.871 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
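Annotation: the two lines above say this polling source has more pollsters than worker threads ([1] thread here), so pollsters serialize behind the executor and the cycle stretches accordingly. The effect is easy to reproduce with a one-worker pool (timings illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)              # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        list(pool.map(poll, [f"pollster-{i}" for i in range(8)]))
    print(f"8 pollsters, 1 worker: {time.monotonic() - start:.1f}s")  # ~0.8s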
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.883 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'name': 'vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.889 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
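Annotation: the two discovery dumps above list every running instance with its flavor, which is enough to re-derive the resource tracker's figures from earlier in this capture (total allocated vcpus: 2; used_ram 1536MB once nova's 512MB reservation is added):

    instances = [  # names and flavors from the discovery lines above
        {"name": "vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt",
         "flavor": {"vcpus": 1, "ram": 512}},
        {"name": "test_0",
         "flavor": {"vcpus": 1, "ram": 512}},
    ]

    vcpus = sum(i["flavor"]["vcpus"] for i in instances)
    ram = sum(i["flavor"]["ram"] for i in instances)
    print(vcpus, "VCPU,", ram + 512, "MB incl. reserved")  # 2 VCPU, 1536 MB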
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.889 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.890 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.890 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.890 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:04:42.890434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.892 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.893 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.893 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.893 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.893 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:04:42.893590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.900 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.907 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.907 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.908 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.908 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.908 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.908 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.909 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:04:42.908770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.910 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.910 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.910 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.911 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.911 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.911 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:04:42.911443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.912 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.912 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.913 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.913 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.914 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.914 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.914 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.914 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:04:42.914420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.915 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.915 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.916 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.916 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.917 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.917 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.917 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.917 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:04:42.917439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.918 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.918 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.919 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.919 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.919 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.920 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.920 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.920 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:04:42.920425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.960 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/cpu volume: 41030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.998 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 48730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
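Annotation: the cpu samples above are cumulative guest CPU time in nanoseconds, so a single reading says little by itself; utilization is the delta between two polls of the same instance, divided by wall time and vCPU count. A sketch of that derivation (only the first sample is from the log; the second sample and the 300 s interval are hypothetical):

    ns_t1 = 41_030_000_000      # instance d32050dc cpu volume, from the log
    ns_t2 = 41_330_000_000      # hypothetical next poll of the same instance
    interval_s, vcpus = 300, 1  # hypothetical interval; m1.small has 1 vcpu

    util_pct = (ns_t2 - ns_t1) / (interval_s * 1e9 * vcpus) * 100
    print(f"cpu_util ~ {util_pct:.1f}%")   # ~0.1%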
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.999 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
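[Editor's note] The cpu samples above (41030000000 and 48730000000) are cumulative guest CPU time in nanoseconds, i.e. about 41.03 s and 48.73 s consumed so far, so a utilization rate has to be derived from two consecutive polls. A sketch of that derivation, with the previous-poll values invented for illustration (the 300 s spacing assumes the default polling interval):

    # Deriving CPU utilization from two cumulative "cpu" samples (unit: ns).
    # The previous-poll values below are invented for illustration.
    prev_ns, prev_t = 40_610_000_000, 1764122383.0   # hypothetical earlier poll
    curr_ns, curr_t = 41_030_000_000, 1764122682.9   # d32050dc.../cpu above

    elapsed_ns = (curr_t - prev_t) * 1e9
    cpu_util_pct = 100.0 * (curr_ns - prev_ns) / elapsed_ns
    print(f"{cpu_util_pct:.2f}% of one vCPU")        # ~0.14% for these numbers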
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:42.999 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.000 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.000 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.000 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.000 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.001 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:04:43.000914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.002 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.003 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
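[Editor's note] The .delta meters report the change since the previous poll rather than the raw counter; the zeros above simply mean the cumulative network.outgoing.bytes counters (2356 and 2384 a moment earlier) did not move between polls. A sketch of deriving a delta from a cached cumulative reading; the cache layout is an assumption, not ceilometer's internal state:

    # Sketch: turning a cumulative counter into a per-interval delta.
    # The cache layout is an assumption, not ceilometer's internal state.
    _last = {}  # (resource_id, meter) -> last cumulative value

    def delta_sample(resource_id, meter, cumulative):
        key = (resource_id, meter)
        prev = _last.get(key)
        _last[key] = cumulative
        if prev is None or cumulative < prev:   # first poll or counter reset
            return 0
        return cumulative - prev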
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.003 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.004 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.004 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.004 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/memory.usage volume: 48.93359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.005 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:04:43.004265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.006 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
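[Editor's note] memory.usage is reported in MB, and the odd-looking fraction 48.93359375 is exactly 50108 KiB, consistent with a hypervisor statistic reported in KiB (as libvirt's memory stats are) divided by 1024; treating that as the source here is an inference from the numbers, not something the log states. A one-liner check:

    # memory.usage is reported in MB; the exact fraction suggests a KiB source.
    kib = 48.93359375 * 1024           # -> 50108.0 KiB exactly
    print(kib, kib * 1024, "bytes")    # 50108.0 KiB == 51310592 bytes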
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.006 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
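[Editor's note] Unlike the meters above, network.outgoing.bytes.rate is skipped outright (manager.py:321): the message indicates that, after discovery and the manager's per-cycle de-duplication, no resources remained for this pollster to sample. A set-based sketch of that skip path; the structure is hypothetical, for illustration only:

    # Sketch of per-cycle resource de-duplication that can empty a
    # pollster's work list (hypothetical structure, for illustration only).
    def resources_to_poll(discovered, already_polled):
        fresh = [r for r in discovered if r not in already_polled]
        already_polled.update(fresh)
        return fresh   # empty list => "Skip pollster ..., no new resources"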
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.007 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.007 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.007 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.007 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.008 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.008 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.009 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.009 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.009 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.010 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.010 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.010 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:04:43.007711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:04:43.010301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.011 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.012 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
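[Editor's note] Note the two worker ids interleaving in these lines: worker 15 does the polling and logs the heartbeat update, while worker 12 logs the corresponding "Updated heartbeat" a moment later, sometimes out of order with 15's sample lines (the network.incoming.bytes update lands after the .delta poll has already begun). That is consistent with heartbeats being handed off to a separate status-updating worker. A queue-based sketch of that split; the names and queue layout are assumptions, only the 15/12 split is from the log:

    # Sketch: the polling worker hands heartbeat timestamps to a separate
    # status worker, matching the two worker ids (15 polls, 12 records).
    import datetime
    import multiprocessing as mp
    import time

    def status_worker(q):
        while True:
            name, ts = q.get()  # plays the role of worker "12" in the log
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    if __name__ == "__main__":
        q = mp.Queue()
        mp.Process(target=status_worker, args=(q,), daemon=True).start()
        # the polling worker ("15") reports right before sampling:
        now = datetime.datetime.now(datetime.timezone.utc)
        q.put(("network.incoming.bytes", now))
        time.sleep(0.2)  # give the status worker time to log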
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.012 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.012 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.013 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.013 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.013 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.013 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:04:43.013191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.015 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.015 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.015 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.015 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.015 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.016 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.016 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.016 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.017 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.017 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.018 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:04:43.016036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.019 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.019 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:04:43.019653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.020 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.021 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.022 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.023 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:04:43.022953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.058 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.060 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.061 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.098 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.099 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.100 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.102 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
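[Editor's note] The disk.device.* meters emit one sample per block device, which is why each instance contributes three volumes above: two 1 GiB devices (1073741824 bytes) and one much smaller one (583680 and 485376 bytes, plausibly a config drive, though the log does not name the devices). Per-device meters carry a composite resource id. A sketch, with the device names invented:

    # Sketch: one capacity sample per block device, resource id "<instance>-<dev>".
    # Device names and the helper are illustrative; the log shows only volumes.
    def capacity_samples(instance_id, devices):
        for dev, capacity_bytes in devices:
            yield {"meter": "disk.device.capacity",
                   "resource_id": f"{instance_id}-{dev}",
                   "volume": capacity_bytes}

    list(capacity_samples("d32050dc-c041-47df-994e-7d05cf1f489a",
                          [("vda", 1073741824), ("vdb", 1073741824),
                           ("hda", 583680)]))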
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.103 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.104 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.104 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.105 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:04:43.105401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.207 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.209 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.307 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.308 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.309 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.311 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.312 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.312 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.313 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.314 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.315 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.317 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.317 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 2007436788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:04:43.316935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.319 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 283353651 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.320 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 197487344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.321 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.322 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.323 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.324 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.325 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.326 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.326 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.327 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.328 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:04:43.328259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.329 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.330 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.330 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.331 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.331 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.332 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.333 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
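[Editor's note] Combining the cumulative counters above gives an average service time per read: for the first device of d32050dc..., 2007436788 ns of read latency over 840 read requests is about 2.39 ms per read. This assumes disk.device.read.latency is in nanoseconds, which the magnitudes and libvirt's block-stats conventions suggest but the log does not state:

    # Average read latency per request from the cumulative counters above
    # (assuming disk.device.read.latency is in nanoseconds).
    lat_ns, reqs = 2_007_436_788, 840       # first device of d32050dc... above
    print(lat_ns / reqs / 1e6, "ms/read")   # ~2.39 ms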
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.334 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.334 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.334 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.334 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.335 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:04:43.335126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.336 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.337 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.337 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.338 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.339 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.339 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.341 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.342 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.342 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.342 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:04:43.342742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.343 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.344 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.345 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.346 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.346 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.347 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.348 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.350 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.350 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.350 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.351 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.351 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.352 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:04:43.351279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.352 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.353 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
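[Editor's note] Both instances report power.state 1, which matches libvirt's domain state code for a running guest. The code table below is libvirt's; reading the meter as a passthrough of those codes is an assumption consistent with the values seen here:

    # libvirt virDomainState codes; power.state == 1 => running.
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATE[1])  # "running"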
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.353 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.354 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.354 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.354 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.354 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.355 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 5738822785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.355 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 28688069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.355 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.356 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.356 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:04:43.354569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.356 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.357 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.358 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:04:43.357634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.358 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.358 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.359 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.359 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.359 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.360 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.361 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.361 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:04:43.360543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.361 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.361 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.362 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.362 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.362 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
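The disk.device.allocation samples above are raw byte counts: 1073741824 is exactly 1 GiB, so each instance carries a 1 GiB allocated device plus one much smaller device. A quick unit check, using only the numbers printed above:

    # Unit check on the allocation volumes logged by _stats_to_sample above.
    for volume in (1073741824, 583680, 485376):
        print(volume, "bytes =", volume / 2**30, "GiB")
    # 1073741824 bytes = 1.0 GiB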
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.363 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.364 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.365 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.366 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:04:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:04:43.367 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
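The manager lines above trace one polling cycle per pollster: discovery via local_instances, a coordination/hashring check, a heartbeat update, one _stats_to_sample call per instance and device, then a "Finished processing" record. A minimal runnable sketch of that control flow, paraphrased from these messages (helper names are illustrative stand-ins, not ceilometer's real internals):

    def discover_local_instances():
        # hypothetical stand-in for the "local_instances" discovery step above
        return ["d32050dc-c041-47df-994e-7d05cf1f489a",
                "b1c088bc-7a6b-4580-93ff-685731747189"]

    def poll(pollster_name):
        print("Polling pollster", pollster_name)
        for instance in discover_local_instances():
            # heartbeat, then one sample per device stat (_stats_to_sample)
            print(f"{instance}/{pollster_name} volume: <n>")
        print("Finished polling pollster", pollster_name)

    for name in ("disk.device.write.requests", "disk.device.allocation"):
        poll(name)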
Nov 26 02:04:43 compute-0 nova_compute[350387]: 2025-11-26 02:04:43.810 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
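The recurring nova_compute [POLLIN] lines are ovsdbapp's idle loop: the ovs Poller wakes on POLLIN for the OVSDB connection's file descriptor and logs the wakeup from poller.py. A minimal sketch of the same wait pattern with that library, assuming python-ovs is installed (waiting on stdin here instead of an OVSDB socket):

    import sys
    from ovs import poller  # same module as the poller.py path in the log lines

    p = poller.Poller()
    p.fd_wait(sys.stdin.fileno(), poller.POLLIN)  # ovsdbapp waits on its fd this way
    p.timer_wait(1000)  # milliseconds; keeps this sketch from blocking forever
    p.block()           # returns on POLLIN or timeout; ovs logs "[POLLIN] on fd N"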
Nov 26 02:04:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 26 02:04:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 26 02:04:44 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 26 02:04:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
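The pgmap lines are the mgr's periodic cluster digest: 321 PGs all active+clean, 139 MiB of logical data, 289 MiB raw used, 60 GiB raw capacity. Combined with osdmap e133 above (3 total, 3 up, 3 in), that is 20 GiB of raw capacity per OSD:

    # Sanity arithmetic on the pgmap/osdmap figures above.
    total_gib, osds = 60, 3  # "60 GiB / 60 GiB avail", "3 total, 3 up, 3 in"
    print(total_gib / osds, "GiB per OSD")  # 20.0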
Nov 26 02:04:46 compute-0 podman[433386]: 2025-11-26 02:04:46.585354718 +0000 UTC m=+0.113833362 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 02:04:46 compute-0 podman[433385]: 2025-11-26 02:04:46.605222085 +0000 UTC m=+0.134411079 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 26 02:04:46 compute-0 podman[433387]: 2025-11-26 02:04:46.647311305 +0000 UTC m=+0.163956437 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:04:46 compute-0 nova_compute[350387]: 2025-11-26 02:04:46.739 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 26 02:04:48 compute-0 nova_compute[350387]: 2025-11-26 02:04:48.814 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 26 02:04:49 compute-0 systemd-logind[800]: New session 62 of user zuul.
Nov 26 02:04:49 compute-0 systemd[1]: Started Session 62 of User zuul.
Nov 26 02:04:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:50 compute-0 python3[433625]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
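This and the similar commands below (podman_exporter at 02:04:58, kepler at 02:05:08) are Zuul-driven ansible.legacy.command tasks checking that a named container is present and healthy. The same probe, sketched in Python with subprocess (arguments mirror the logged shell pipeline; podman on PATH is assumed):

    import subprocess

    def container_status(name):
        # mirrors: podman ps -a --format "{{.Names}} {{.Status}}" | grep <name>
        out = subprocess.run(
            ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
            capture_output=True, text=True, check=True).stdout
        return [line for line in out.splitlines() if name in line]

    print(container_status("node_exporter"))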
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 5 op/s
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:04:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:04:51 compute-0 podman[433665]: 2025-11-26 02:04:51.597354249 +0000 UTC m=+0.139266445 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 02:04:51 compute-0 podman[433666]: 2025-11-26 02:04:51.680019556 +0000 UTC m=+0.219628638 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:04:51 compute-0 nova_compute[350387]: 2025-11-26 02:04:51.742 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:04:53 compute-0 nova_compute[350387]: 2025-11-26 02:04:53.819 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:54 compute-0 podman[433710]: 2025-11-26 02:04:54.612127248 +0000 UTC m=+0.152468725 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Nov 26 02:04:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:04:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:04:56 compute-0 nova_compute[350387]: 2025-11-26 02:04:56.745 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:04:57 compute-0 podman[433729]: 2025-11-26 02:04:57.596463004 +0000 UTC m=+0.141288222 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 02:04:58 compute-0 python3[433923]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 02:04:58 compute-0 nova_compute[350387]: 2025-11-26 02:04:58.822 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:04:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:04:59 compute-0 podman[158021]: time="2025-11-26T02:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:04:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:04:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
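The two GET lines above are prometheus-podman-exporter hitting the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter's config_data earlier). The same container-listing call can be reproduced over that socket with only the standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client for the libpod unix socket named in the exporter config."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")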
Nov 26 02:04:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:00 compute-0 podman[433962]: 2025-11-26 02:05:00.602700491 +0000 UTC m=+0.145212262 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Nov 26 02:05:00 compute-0 podman[433963]: 2025-11-26 02:05:00.610547201 +0000 UTC m=+0.147152316 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
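Per the config_data above, node_exporter publishes on host port 9100 behind a --web.config.file, which may enable TLS and auth, so the plain-HTTP scrape below is an assumption:

    import urllib.request

    # Assumes the exporter answers plain HTTP on the 9100:9100 port mapping
    # above; if its web.config.file enables TLS, switch to https and supply
    # the CA from /etc/node_exporter/tls.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)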
Nov 26 02:05:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:01 compute-0 openstack_network_exporter[367323]: ERROR   02:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:05:01 compute-0 openstack_network_exporter[367323]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:05:01 compute-0 openstack_network_exporter[367323]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:05:01 compute-0 openstack_network_exporter[367323]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:05:01 compute-0 openstack_network_exporter[367323]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
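These ERROR lines recur on each exporter scrape: it finds no control socket for ovsdb-server or ovn-northd under its mounted run directories (ovn-northd does not run on a compute node), and dpif-netdev/pmd-* are userspace-datapath (DPDK) commands, which fail on a host using the kernel datapath. The equivalent manual probe, sketched with subprocess (whether the OVS/OVN tools are on PATH here is an assumption):

    import subprocess

    # Manual re-run of the exporter's failing appctl probes.
    for cmd in (["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                ["ovn-appctl", "-t", "ovn-northd", "status"]):
        try:
            r = subprocess.run(cmd, capture_output=True, text=True)
            print(cmd[0], r.returncode, (r.stderr or r.stdout).strip()[:80])
        except FileNotFoundError:
            print(cmd[0], "not installed on this host")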
Nov 26 02:05:01 compute-0 nova_compute[350387]: 2025-11-26 02:05:01.749 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:03 compute-0 nova_compute[350387]: 2025-11-26 02:05:03.826 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:06 compute-0 nova_compute[350387]: 2025-11-26 02:05:06.751 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:08 compute-0 nova_compute[350387]: 2025-11-26 02:05:08.829 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:08 compute-0 python3[434181]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 02:05:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:11 compute-0 nova_compute[350387]: 2025-11-26 02:05:11.754 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:13 compute-0 nova_compute[350387]: 2025-11-26 02:05:13.834 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:16 compute-0 nova_compute[350387]: 2025-11-26 02:05:16.758 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:17 compute-0 podman[434221]: 2025-11-26 02:05:17.561373576 +0000 UTC m=+0.104738278 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:05:17 compute-0 podman[434220]: 2025-11-26 02:05:17.569108182 +0000 UTC m=+0.117318420 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Nov 26 02:05:17 compute-0 podman[434222]: 2025-11-26 02:05:17.594646598 +0000 UTC m=+0.130679954 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:05:18 compute-0 nova_compute[350387]: 2025-11-26 02:05:18.836 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c174929-f4b8-4783-98a2-bbd4a47fee20 does not exist
Nov 26 02:05:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5772207a-54da-4ece-8452-cbd6efcae492 does not exist
Nov 26 02:05:19 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fffbf5cb-eb46-40d3-b20c-050892680e93 does not exist
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:05:19 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:05:19 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.283298732 +0000 UTC m=+0.098858112 container create 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.246188982 +0000 UTC m=+0.061748442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:20 compute-0 systemd[1]: Started libpod-conmon-5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72.scope.
Nov 26 02:05:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.440183041 +0000 UTC m=+0.255742501 container init 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.453079783 +0000 UTC m=+0.268639183 container start 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.459644677 +0000 UTC m=+0.275204127 container attach 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:05:20 compute-0 intelligent_dirac[434564]: 167 167
Nov 26 02:05:20 compute-0 systemd[1]: libpod-5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72.scope: Deactivated successfully.
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.465163871 +0000 UTC m=+0.280723271 container died 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-520e0cb5b3af2ce45b361ed188239ca433d9327d76c4db6658302ca9f91f1e00-merged.mount: Deactivated successfully.
Nov 26 02:05:20 compute-0 podman[434549]: 2025-11-26 02:05:20.556679297 +0000 UTC m=+0.372238697 container remove 5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:05:20 compute-0 systemd[1]: libpod-conmon-5d26a1ddbbc04e22882f6717c65257ad374702d9522f2452c4184030effafe72.scope: Deactivated successfully.
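The six podman events above (create, init, start, attach, died, remove) are one complete run of a short-lived cephadm probe container: it prints "167 167" (the ceph uid/gid inside the image) and exits within roughly 0.3 s. A minimal sketch for reconstructing such lifecycles from an exported journal, assuming the journal has been saved to a plain-text file named journal.txt (hypothetical):

    # Group podman container events by container ID to spot complete
    # create -> init -> start -> attach -> died -> remove lifecycles.
    import re
    from collections import defaultdict

    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})"
    )

    lifecycles = defaultdict(list)
    with open("journal.txt") as fh:   # hypothetical export of this journal
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                event, cid = m.groups()
                lifecycles[cid].append(event)

    for cid, events in lifecycles.items():
        print(cid[:12], "->", " ".join(events))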
Nov 26 02:05:20 compute-0 podman[434588]: 2025-11-26 02:05:20.873212321 +0000 UTC m=+0.084522411 container create 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:05:20 compute-0 podman[434588]: 2025-11-26 02:05:20.842708066 +0000 UTC m=+0.054018196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:20 compute-0 systemd[1]: Started libpod-conmon-9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172.scope.
Nov 26 02:05:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
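The repeated kernel notices mean these overlay mounts sit on an XFS filesystem created without the bigtime feature, so inode timestamps are capped at 2038-01-19 (0x7fffffff). A hedged way to confirm this from userspace, assuming an xfsprogs recent enough to report the flag:

    # Check whether the container-storage filesystem has XFS 'bigtime'
    # (64-bit timestamps); recent xfs_info prints a bigtime=0/1 flag.
    import subprocess

    info = subprocess.run(
        ["xfs_info", "/var/lib/containers/storage"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("64-bit timestamps enabled" if "bigtime=1" in info
          else "timestamps capped at 2038 (bigtime off)")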
Nov 26 02:05:21 compute-0 podman[434588]: 2025-11-26 02:05:21.039505693 +0000 UTC m=+0.250815843 container init 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 02:05:21 compute-0 podman[434588]: 2025-11-26 02:05:21.070353498 +0000 UTC m=+0.281663588 container start 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 02:05:21 compute-0 podman[434588]: 2025-11-26 02:05:21.077250181 +0000 UTC m=+0.288560281 container attach 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:05:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:21 compute-0 nova_compute[350387]: 2025-11-26 02:05:21.760 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:22 compute-0 beautiful_dewdney[434604]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:05:22 compute-0 beautiful_dewdney[434604]: --> relative data size: 1.0
Nov 26 02:05:22 compute-0 beautiful_dewdney[434604]: --> All data devices are unavailable
Nov 26 02:05:22 compute-0 systemd[1]: libpod-9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172.scope: Deactivated successfully.
Nov 26 02:05:22 compute-0 podman[434588]: 2025-11-26 02:05:22.281516701 +0000 UTC m=+1.492826781 container died 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:05:22 compute-0 systemd[1]: libpod-9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172.scope: Consumed 1.141s CPU time.
Nov 26 02:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7389e83951db55c22dab7c02718a128c6f72302953e8633a11a28b3fb47158a-merged.mount: Deactivated successfully.
Nov 26 02:05:22 compute-0 podman[434588]: 2025-11-26 02:05:22.361674338 +0000 UTC m=+1.572984388 container remove 9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:05:22 compute-0 systemd[1]: libpod-conmon-9912495c2340b60cdf5fd5f801894ae76e1e7a2681a179792bbee0b27f0af172.scope: Deactivated successfully.
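The beautiful_dewdney container is ceph-volume's OSD planning pass: it sees 0 physical and 3 LVM data devices and reports all of them unavailable (they already carry OSDs), so no new OSDs are created and the container exits. A sketch of the same availability check done by hand, assuming ceph-volume is reachable (for example inside the ceph image this host already runs) and that its inventory JSON carries the usual path/available/rejected_reasons fields:

    # List devices the way the probe sees them and show why each one
    # is (un)available for new OSDs.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    for dev in json.loads(out):
        print(dev.get("path"), "available:", dev.get("available"),
              "rejected:", dev.get("rejected_reasons"))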
Nov 26 02:05:22 compute-0 podman[434634]: 2025-11-26 02:05:22.44627538 +0000 UTC m=+0.116565429 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:05:22 compute-0 podman[434641]: 2025-11-26 02:05:22.500750537 +0000 UTC m=+0.158013930 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
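The two health_status=healthy events come from podman's healthcheck timers running the 'test' command embedded in each container's config_data (/openstack/healthcheck ...). The same check can be triggered on demand; a minimal sketch:

    # Run the configured healthcheck for each container by hand;
    # 'podman healthcheck run NAME' exits 0 when the check passes.
    import subprocess

    for name in ("ceilometer_agent_ipmi", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")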
Nov 26 02:05:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.485435533 +0000 UTC m=+0.080386625 container create 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.451309786 +0000 UTC m=+0.046260968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:23 compute-0 systemd[1]: Started libpod-conmon-5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29.scope.
Nov 26 02:05:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.642086024 +0000 UTC m=+0.237037206 container init 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.653122553 +0000 UTC m=+0.248073635 container start 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.657368043 +0000 UTC m=+0.252319175 container attach 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:05:23 compute-0 sad_bardeen[434914]: 167 167
Nov 26 02:05:23 compute-0 systemd[1]: libpod-5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29.scope: Deactivated successfully.
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.661469128 +0000 UTC m=+0.256420210 container died 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-687ca9d30f4faefb7f7199572f83a944a24fb285604a10eae6bf01f69a6ff919-merged.mount: Deactivated successfully.
Nov 26 02:05:23 compute-0 podman[434860]: 2025-11-26 02:05:23.718320701 +0000 UTC m=+0.313271783 container remove 5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:05:23 compute-0 systemd[1]: libpod-conmon-5c58eea7624e21bf3d123c0b59b203475259f28d621c30c29285d0cc550f4f29.scope: Deactivated successfully.
Nov 26 02:05:23 compute-0 nova_compute[350387]: 2025-11-26 02:05:23.838 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:23 compute-0 podman[434985]: 2025-11-26 02:05:23.974355109 +0000 UTC m=+0.079391177 container create bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 26 02:05:24 compute-0 podman[434985]: 2025-11-26 02:05:23.941641082 +0000 UTC m=+0.046677150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:24 compute-0 systemd[1]: Started libpod-conmon-bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e.scope.
Nov 26 02:05:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff257b009f0520b0ae3d4889d6d5da6f233a261b0c11759583fe23a6c8970805/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff257b009f0520b0ae3d4889d6d5da6f233a261b0c11759583fe23a6c8970805/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff257b009f0520b0ae3d4889d6d5da6f233a261b0c11759583fe23a6c8970805/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff257b009f0520b0ae3d4889d6d5da6f233a261b0c11759583fe23a6c8970805/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:24 compute-0 podman[434985]: 2025-11-26 02:05:24.176499856 +0000 UTC m=+0.281535964 container init bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 02:05:24 compute-0 podman[434985]: 2025-11-26 02:05:24.19875047 +0000 UTC m=+0.303786538 container start bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 02:05:24 compute-0 podman[434985]: 2025-11-26 02:05:24.223234006 +0000 UTC m=+0.328270044 container attach bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:05:24 compute-0 python3[435057]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
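This Ansible task shells out to podman to check whether an openstack_network_exporter container exists in any state. A sketch of the same check without the shell pipeline, filtering in Python instead of grep:

    # Equivalent of: podman ps -a --format "{{.Names}} {{.Status}}" | grep ...
    import subprocess

    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout

    hits = [l for l in out.splitlines() if "openstack_network_exporter" in l]
    print("\n".join(hits) if hits else "no such container (grep would exit 1)")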
Nov 26 02:05:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:05:24.987 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:05:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:05:24.988 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:05:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:05:24.989 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
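The acquire/wait/hold triple above is oslo.concurrency's standard lock logging around neutron's ProcessMonitor._check_child_processes. A sketch of the pattern that produces it (illustrative, not neutron's actual code):

    # oslo.concurrency emits the "Acquiring"/"acquired"/"released" DEBUG
    # lines seen above whenever a decorated function takes a named lock.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # runs with the lock held; wait/held times are logged

    check_child_processes()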
Nov 26 02:05:25 compute-0 determined_cartwright[435035]: {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    "0": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "devices": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "/dev/loop3"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            ],
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_name": "ceph_lv0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_size": "21470642176",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "name": "ceph_lv0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "tags": {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_name": "ceph",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.crush_device_class": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.encrypted": "0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_id": "0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.vdo": "0"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            },
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "vg_name": "ceph_vg0"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        }
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    ],
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    "1": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "devices": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "/dev/loop4"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            ],
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_name": "ceph_lv1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_size": "21470642176",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "name": "ceph_lv1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "tags": {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_name": "ceph",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.crush_device_class": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.encrypted": "0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_id": "1",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.vdo": "0"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            },
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "vg_name": "ceph_vg1"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        }
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    ],
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    "2": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "devices": [
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "/dev/loop5"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            ],
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_name": "ceph_lv2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_size": "21470642176",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "name": "ceph_lv2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "tags": {
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.cluster_name": "ceph",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.crush_device_class": "",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.encrypted": "0",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osd_id": "2",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:                "ceph.vdo": "0"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            },
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "type": "block",
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:            "vg_name": "ceph_vg2"
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:        }
Nov 26 02:05:25 compute-0 determined_cartwright[435035]:    ]
Nov 26 02:05:25 compute-0 determined_cartwright[435035]: }
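The JSON emitted by determined_cartwright has the shape of ceph-volume lvm list --format json: top-level keys are OSD ids, each mapping to logical volumes whose tags carry the cluster and OSD fsids. A sketch that flattens it to one line per OSD, assuming the JSON was saved to lvm_list.json (hypothetical):

    # Summarize ceph-volume lvm list JSON: one line per OSD.
    import json

    with open("lvm_list.json") as fh:   # hypothetical copy of the JSON above
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = round(int(lv["lv_size"]) / 2**30)
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} ~{size_gib} GiB")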
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.040206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725040270, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 799, "num_deletes": 253, "total_data_size": 982638, "memory_usage": 996416, "flush_reason": "Manual Compaction"}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725047427, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 630598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33192, "largest_seqno": 33990, "table_properties": {"data_size": 627167, "index_size": 1211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9210, "raw_average_key_size": 20, "raw_value_size": 619828, "raw_average_value_size": 1399, "num_data_blocks": 55, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122660, "oldest_key_time": 1764122660, "file_creation_time": 1764122725, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 7319 microseconds, and 3907 cpu microseconds.
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.047522) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 630598 bytes OK
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.047553) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.050543) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.050567) EVENT_LOG_v1 {"time_micros": 1764122725050559, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.050591) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 978616, prev total WAL file size 978616, number of live WAL files 2.
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.051922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353037' seq:0, type:0; will stop at (end)
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(615KB)], [74(9954KB)]
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725051997, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10824009, "oldest_snapshot_seqno": -1}
Nov 26 02:05:25 compute-0 systemd[1]: libpod-bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e.scope: Deactivated successfully.
Nov 26 02:05:25 compute-0 podman[434985]: 2025-11-26 02:05:25.058551624 +0000 UTC m=+1.163587672 container died bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5317 keys, 7850683 bytes, temperature: kUnknown
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725101749, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7850683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7816518, "index_size": 19783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134485, "raw_average_key_size": 25, "raw_value_size": 7721659, "raw_average_value_size": 1452, "num_data_blocks": 818, "num_entries": 5317, "num_filter_entries": 5317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122725, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff257b009f0520b0ae3d4889d6d5da6f233a261b0c11759583fe23a6c8970805-merged.mount: Deactivated successfully.
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.102085) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7850683 bytes
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.109004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.0 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(29.6) write-amplify(12.4) OK, records in: 5811, records dropped: 494 output_compression: NoCompression
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.109039) EVENT_LOG_v1 {"time_micros": 1764122725109025, "job": 42, "event": "compaction_finished", "compaction_time_micros": 49889, "compaction_time_cpu_micros": 32643, "output_level": 6, "num_output_files": 1, "total_output_size": 7850683, "num_input_records": 5811, "num_output_records": 5317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725109360, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122725112401, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.051619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.112987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.112995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.112997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.112998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:05:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:05:25.113000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
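The mon has just run a manual compaction of its store.db: a 630 KB L0 flush (job 41) followed by an L0+L6 -> L6 compaction (job 42) that rewrote ~10.8 MB of input into one 7.85 MB table, with the summary line itself reporting write-amplify(12.4). The EVENT_LOG_v1 payloads are JSON; a sketch that extracts them from an exported journal (journal.txt, hypothetical) and summarizes finished compactions:

    # Pull EVENT_LOG_v1 JSON out of rocksdb log lines and report compactions.
    import json
    import re

    EV_RE = re.compile(r"EVENT_LOG_v1 ({.*})")

    with open("journal.txt") as fh:   # hypothetical export of this journal
        for line in fh:
            m = EV_RE.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: -> L{ev['output_level']}, "
                      f"{ev['total_output_size']/1e6:.1f} MB out, "
                      f"{ev['compaction_time_micros']/1e6:.3f} s")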
Nov 26 02:05:25 compute-0 podman[434985]: 2025-11-26 02:05:25.152422236 +0000 UTC m=+1.257458274 container remove bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 02:05:25 compute-0 systemd[1]: libpod-conmon-bc02a1a443cf432d01e4d40d3b26e93e256f9c8a0d5844dc8bdff6f267b86c5e.scope: Deactivated successfully.
Nov 26 02:05:25 compute-0 podman[435102]: 2025-11-26 02:05:25.245583706 +0000 UTC m=+0.147611808 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 02:05:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.226438294 +0000 UTC m=+0.081812295 container create aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.189692034 +0000 UTC m=+0.045066005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:26 compute-0 systemd[1]: Started libpod-conmon-aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594.scope.
Nov 26 02:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.367488088 +0000 UTC m=+0.222862119 container init aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.382434997 +0000 UTC m=+0.237808968 container start aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.388634581 +0000 UTC m=+0.244008622 container attach aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:05:26 compute-0 dazzling_nightingale[435282]: 167 167
Nov 26 02:05:26 compute-0 systemd[1]: libpod-aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594.scope: Deactivated successfully.
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.394698851 +0000 UTC m=+0.250072822 container died aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 02:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-51aca3f4d92b5e4783540f60bb6e818709e2e821b0a6ad7370c4001498293771-merged.mount: Deactivated successfully.
Nov 26 02:05:26 compute-0 podman[435266]: 2025-11-26 02:05:26.469035495 +0000 UTC m=+0.324409496 container remove aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:05:26 compute-0 systemd[1]: libpod-conmon-aa67a794d3e0e74171877c3e2da6394a6b67028c92ced3934a18bf818d8d5594.scope: Deactivated successfully.
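
The create / init / start / attach / died / remove sequence above is the normal lifecycle of a short-lived `podman run --rm` container: cephadm launches throwaway ceph containers like this to probe the host, and this one exited immediately after printing "167 167", consistent with a uid/gid probe (167:167 is the conventional ceph user and group in these images). A minimal sketch of the same kind of one-shot probe, assuming the podman CLI and the image digest shown in the log (the actual command cephadm ran is not visible here):

    # One-shot probe container; --rm yields exactly the
    # create/init/start/attach/died/remove events seen in the journal.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_probe(argv):
        # Hypothetical probe: print the uid/gid owning /var/lib/ceph
        # inside the image. The entrypoint chosen here is an assumption.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", argv[0], IMAGE, *argv[1:]],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(run_probe(["stat", "-c", "%u %g", "/var/lib/ceph"]))
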
Nov 26 02:05:26 compute-0 podman[435305]: 2025-11-26 02:05:26.764641562 +0000 UTC m=+0.086609369 container create b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:05:26 compute-0 nova_compute[350387]: 2025-11-26 02:05:26.764 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:26 compute-0 podman[435305]: 2025-11-26 02:05:26.729137517 +0000 UTC m=+0.051105324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:05:26 compute-0 systemd[1]: Started libpod-conmon-b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f.scope.
Nov 26 02:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448e8b21ef838ad20d3d42b3666a0cb6beb48129bb1d5bd0458598d3c6af0df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448e8b21ef838ad20d3d42b3666a0cb6beb48129bb1d5bd0458598d3c6af0df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448e8b21ef838ad20d3d42b3666a0cb6beb48129bb1d5bd0458598d3c6af0df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a448e8b21ef838ad20d3d42b3666a0cb6beb48129bb1d5bd0458598d3c6af0df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
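
The kernel logs "supports timestamps until 2038" once per bind mount when the underlying XFS filesystem lacks the bigtime feature, so each volume the container mounts (rootfs, ceph.conf, log and crash directories) produces one line. A small check, assuming xfsprogs' xfs_info is installed; the mount point below is an example, not taken from this log:

    # Report whether an XFS filesystem can store post-2038 timestamps.
    import subprocess

    def has_bigtime(mountpoint):
        info = subprocess.run(["xfs_info", mountpoint], capture_output=True,
                              text=True, check=True).stdout
        return "bigtime=1" in info  # bigtime=0 -> limited to 2038

    print(has_bigtime("/var/lib/containers"))
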
Nov 26 02:05:26 compute-0 podman[435305]: 2025-11-26 02:05:26.921594372 +0000 UTC m=+0.243562179 container init b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 02:05:26 compute-0 podman[435305]: 2025-11-26 02:05:26.95613374 +0000 UTC m=+0.278101547 container start b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:05:26 compute-0 podman[435305]: 2025-11-26 02:05:26.962803167 +0000 UTC m=+0.284770944 container attach b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:05:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:05:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/472492892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:05:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:05:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/472492892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:05:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:28 compute-0 youthful_snyder[435321]: {
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_id": 0,
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "type": "bluestore"
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    },
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_id": 2,
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "type": "bluestore"
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    },
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_id": 1,
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:        "type": "bluestore"
Nov 26 02:05:28 compute-0 youthful_snyder[435321]:    }
Nov 26 02:05:28 compute-0 youthful_snyder[435321]: }
Nov 26 02:05:28 compute-0 systemd[1]: libpod-b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f.scope: Deactivated successfully.
Nov 26 02:05:28 compute-0 systemd[1]: libpod-b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f.scope: Consumed 1.183s CPU time.
Nov 26 02:05:28 compute-0 podman[435305]: 2025-11-26 02:05:28.148035535 +0000 UTC m=+1.470003362 container died b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a448e8b21ef838ad20d3d42b3666a0cb6beb48129bb1d5bd0458598d3c6af0df-merged.mount: Deactivated successfully.
Nov 26 02:05:28 compute-0 podman[435305]: 2025-11-26 02:05:28.23955566 +0000 UTC m=+1.561523467 container remove b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:05:28 compute-0 systemd[1]: libpod-conmon-b52e782dbc25eb09b54832b1c6fec00745f0f327ab1828ae10569e5b5fac677f.scope: Deactivated successfully.
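
The JSON that youthful_snyder printed above matches the shape of `ceph-volume raw list` output: a map keyed by OSD uuid, giving the bluestore device, osd_id, and cluster fsid for each of the three OSDs (0, 1, 2) backed by the ceph_vg*/ceph_lv* logical volumes. A sketch that reduces such output to an osd_id-to-device map; the literal is an abridged copy of the logged JSON:

    import json

    def osd_devices(raw_list_json):
        data = json.loads(raw_list_json)
        return {e["osd_id"]: e["device"] for e in data.values()}

    example = """{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }"""

    print(osd_devices(example))  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
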
Nov 26 02:05:28 compute-0 podman[435355]: 2025-11-26 02:05:28.288915334 +0000 UTC m=+0.111640201 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
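
The health_status events are emitted each time podman runs the healthcheck command declared in the container's config_data (here 'test': '/openstack/healthcheck' with a bind-mounted healthcheck directory); health_failing_streak counts consecutive failures before the status flips to unhealthy. A sketch that triggers a check by hand and reads the recorded state, assuming the podman CLI; running it manually is our addition, not something the deployment does:

    import json, subprocess

    def health_state(name):
        subprocess.run(["podman", "healthcheck", "run", name], check=False)
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    state = health_state("multipathd")  # container name from the log
    print(state["Status"], state["FailingStreak"])
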
Nov 26 02:05:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:05:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:28 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 16a129e7-2636-4aff-a35f-15ed67405fd4 does not exist
Nov 26 02:05:28 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 15656442-605f-4729-b2f5-bb916096c52b does not exist
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.331 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.331 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.331 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.332 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.332 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:05:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:05:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16709554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.838 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
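
Nova's resource tracker measures Ceph-backed disk capacity by shelling out to the exact command logged above, and the monitor's audit channel records the same request arriving as client.openstack. A direct equivalent; the command line is copied from the log, and the "stats" fields are standard `ceph df --format=json` output:

    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
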
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.842 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.984 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.984 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.985 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.996 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.997 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:28 compute-0 nova_compute[350387]: 2025-11-26 02:05:28.998 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:05:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.653 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.655 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.656 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.657 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:05:29 compute-0 podman[158021]: time="2025-11-26T02:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:05:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
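
These access-log lines come from the podman system service answering libpod REST calls on the unix socket /run/podman/podman.sock; the Go-http-client user agent and the containers/json and containers/stats endpoints point to the podman_exporter scraper configured later in this log. A sketch of the same listing through the podman-py client, which is an assumption (any HTTP client on the socket would do):

    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for ctr in client.containers.list(all=True):  # all=true, as in the URL
            print(ctr.name, ctr.status)
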
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.769 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.769 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.769 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.770 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:05:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Nov 26 02:05:29 compute-0 nova_compute[350387]: 2025-11-26 02:05:29.843 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:05:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:05:30 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331606824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:05:30 compute-0 nova_compute[350387]: 2025-11-26 02:05:30.446 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:05:30 compute-0 nova_compute[350387]: 2025-11-26 02:05:30.454 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:05:30 compute-0 nova_compute[350387]: 2025-11-26 02:05:30.476 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:05:30 compute-0 nova_compute[350387]: 2025-11-26 02:05:30.478 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:05:30 compute-0 nova_compute[350387]: 2025-11-26 02:05:30.478 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
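
The inventory nova just confirmed unchanged fixes the placement capacity of this host: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the logged numbers:

    # Values copied from the inventory in the log above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
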
Nov 26 02:05:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:31 compute-0 openstack_network_exporter[367323]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:05:31 compute-0 openstack_network_exporter[367323]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:05:31 compute-0 openstack_network_exporter[367323]: ERROR   02:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:05:31 compute-0 openstack_network_exporter[367323]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:05:31 compute-0 openstack_network_exporter[367323]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
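
openstack_network_exporter repeats this error burst on every scrape: it looks for the control sockets of ovn-northd and ovsdb-server, which run on controller nodes rather than on this compute node, so the failures are expected noise here. A sketch of the same socket discovery; the run directories follow the volume mappings in the exporter's config shown below, and the daemon list is our assumption:

    import glob

    for daemon in ("ovs-vswitchd", "ovsdb-server", "ovn-controller", "ovn-northd"):
        socks = (glob.glob(f"/var/run/openvswitch/{daemon}.*.ctl")
                 + glob.glob(f"/var/lib/openvswitch/ovn/{daemon}.*.ctl"))
        print(daemon, "ok" if socks else "no control socket found")
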
Nov 26 02:05:31 compute-0 podman[435478]: 2025-11-26 02:05:31.558663879 +0000 UTC m=+0.114760788 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Nov 26 02:05:31 compute-0 podman[435479]: 2025-11-26 02:05:31.59864859 +0000 UTC m=+0.147051374 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:05:31 compute-0 nova_compute[350387]: 2025-11-26 02:05:31.766 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:32 compute-0 nova_compute[350387]: 2025-11-26 02:05:32.478 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:32 compute-0 nova_compute[350387]: 2025-11-26 02:05:32.479 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:32 compute-0 nova_compute[350387]: 2025-11-26 02:05:32.479 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:33 compute-0 nova_compute[350387]: 2025-11-26 02:05:33.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:33 compute-0 nova_compute[350387]: 2025-11-26 02:05:33.845 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.946 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.946 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.947 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:05:34 compute-0 nova_compute[350387]: 2025-11-26 02:05:34.947 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:05:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.432 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.450 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.451 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
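
The heal task rebuilt the instance's network info cache; the network_info blob above carries everything nova caches about the port: id, MAC, fixed IP, floating IP, and the OVN/OVS binding details. A sketch that extracts the addresses; the literal is an abridged copy of the logged structure:

    network_info = [{
        "id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29",
        "address": "fa:16:3e:0f:66:48",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.29", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.186",
                              "type": "floating"}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], fips)
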
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.452 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.453 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.454 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:05:36 compute-0 nova_compute[350387]: 2025-11-26 02:05:36.769 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:38 compute-0 nova_compute[350387]: 2025-11-26 02:05:38.849 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:05:41
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
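
Each balancer cycle builds an optimize plan in upmap mode with a 5% misplaced ceiling; "prepared 0/10 changes" means the PG distribution across the pools listed above already needed no upmap entries. A quick way to confirm the same state from the CLI, wrapped in Python; the command and keys are standard ceph output, and admin credentials are assumed:

    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status["active"], status["mode"])  # e.g. True upmap
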
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:41 compute-0 nova_compute[350387]: 2025-11-26 02:05:41.450 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:05:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:05:41 compute-0 nova_compute[350387]: 2025-11-26 02:05:41.772 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:43 compute-0 nova_compute[350387]: 2025-11-26 02:05:43.853 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:46 compute-0 nova_compute[350387]: 2025-11-26 02:05:46.775 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:48 compute-0 podman[435524]: 2025-11-26 02:05:48.601406273 +0000 UTC m=+0.130842089 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:05:48 compute-0 podman[435523]: 2025-11-26 02:05:48.621369483 +0000 UTC m=+0.158727961 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 02:05:48 compute-0 podman[435522]: 2025-11-26 02:05:48.626347272 +0000 UTC m=+0.168290619 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 02:05:48 compute-0 nova_compute[350387]: 2025-11-26 02:05:48.856 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:05:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:05:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
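Each pg_autoscaler pass above computes a per-pool target as usage_ratio x bias x (OSD count x mon_target_pg_per_osd). With the three OSDs in this log and the default mon_target_pg_per_osd of 100 the multiplier is 300, which reproduces the logged values exactly (0.00110425264130364 x 1.0 x 300 = 0.331275792391092 for 'vms'). A rough re-derivation as a sketch; the per-pool pg_num_min floors (1 for '.mgr', 16 for CephFS metadata, 32 otherwise) are assumed Ceph defaults, and the real module additionally refuses changes smaller than about a 3x factor:

    import math

    def quantized_pg_target(usage_ratio, bias, pg_num_min=32,
                            osd_count=3, target_pg_per_osd=100):
        # Raw target as logged, e.g. "pg target 0.331..." for pool 'vms'.
        raw = usage_ratio * bias * osd_count * target_pg_per_osd
        nearest_pow2 = 2 ** round(math.log2(raw)) if raw > 0 else 1
        # The floor explains why tiny pools stay quantized at 32 (or 16/1).
        return max(pg_num_min, nearest_pow2)

    quantized_pg_target(0.00110425264130364, 1.0)                   # 'vms' -> 32
    quantized_pg_target(5.087256625643029e-07, 4.0, pg_num_min=16)  # meta  -> 16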
Nov 26 02:05:51 compute-0 nova_compute[350387]: 2025-11-26 02:05:51.779 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:05:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.1 total, 600.0 interval
    Cumulative writes: 7549 writes, 29K keys, 7549 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7549 writes, 1690 syncs, 4.47 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 868 writes, 2568 keys, 868 commit groups, 1.0 writes per commit group, ingest: 2.17 MB, 0.00 MB/s
    Interval WAL: 868 writes, 376 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:05:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:53 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 02:05:53 compute-0 podman[435585]: 2025-11-26 02:05:53.575625703 +0000 UTC m=+0.144875512 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 26 02:05:53 compute-0 podman[435586]: 2025-11-26 02:05:53.636529711 +0000 UTC m=+0.199337510 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 02:05:53 compute-0 nova_compute[350387]: 2025-11-26 02:05:53.860 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 02:05:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:05:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:55 compute-0 podman[435630]: 2025-11-26 02:05:55.582690169 +0000 UTC m=+0.135332534 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release-0.7.12=, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, vcs-type=git, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:05:56 compute-0 nova_compute[350387]: 2025-11-26 02:05:56.783 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:58 compute-0 podman[435650]: 2025-11-26 02:05:58.588511906 +0000 UTC m=+0.132746922 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 26 02:05:58 compute-0 nova_compute[350387]: 2025-11-26 02:05:58.864 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:05:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:05:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.1 total, 600.0 interval
    Cumulative writes: 8724 writes, 34K keys, 8724 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 8724 writes, 2036 syncs, 4.28 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 878 writes, 2698 keys, 878 commit groups, 1.0 writes per commit group, ingest: 1.85 MB, 0.00 MB/s
    Interval WAL: 878 writes, 391 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:05:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:05:59 compute-0 podman[158021]: time="2025-11-26T02:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:05:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:05:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Nov 26 02:06:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:01 compute-0 openstack_network_exporter[367323]: ERROR   02:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:06:01 compute-0 openstack_network_exporter[367323]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:06:01 compute-0 openstack_network_exporter[367323]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:06:01 compute-0 openstack_network_exporter[367323]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:06:01 compute-0 openstack_network_exporter[367323]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
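The appctl errors above mean the exporter could not find the daemons' unixctl control sockets; OVS daemons create them as <rundir>/<daemon>.<pid>.ctl, and ovs-appctl normally resolves that path from the daemon's pidfile. A sketch of the same existence check followed by a call only when a socket is present; the rundir matches the /var/run/openvswitch volume the exporter mounts:

    import glob
    import subprocess

    # unixctl sockets look like /var/run/openvswitch/ovsdb-server.<pid>.ctl
    sockets = glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl")
    if not sockets:
        # Same condition the exporter reports above.
        print("no control socket files found for the ovs db server")
    else:
        out = subprocess.run(["ovs-appctl", "-t", sockets[0], "version"],
                             capture_output=True, text=True)
        print(out.stdout.strip())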
Nov 26 02:06:01 compute-0 nova_compute[350387]: 2025-11-26 02:06:01.786 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:02 compute-0 podman[435670]: 2025-11-26 02:06:02.578496085 +0000 UTC m=+0.111849727 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:06:02 compute-0 podman[435669]: 2025-11-26 02:06:02.596610283 +0000 UTC m=+0.137082324 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.expose-services=)
Nov 26 02:06:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:03 compute-0 nova_compute[350387]: 2025-11-26 02:06:03.868 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:06:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.2 total, 600.0 interval
    Cumulative writes: 7526 writes, 29K keys, 7526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7526 writes, 1699 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 848 writes, 2356 keys, 848 commit groups, 1.0 writes per commit group, ingest: 1.52 MB, 0.00 MB/s
    Interval WAL: 848 writes, 383 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:06:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 02:06:06 compute-0 nova_compute[350387]: 2025-11-26 02:06:06.789 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:08 compute-0 nova_compute[350387]: 2025-11-26 02:06:08.870 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:11 compute-0 nova_compute[350387]: 2025-11-26 02:06:11.793 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:13 compute-0 nova_compute[350387]: 2025-11-26 02:06:13.873 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:16 compute-0 nova_compute[350387]: 2025-11-26 02:06:16.796 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:18 compute-0 nova_compute[350387]: 2025-11-26 02:06:18.876 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:19 compute-0 podman[435714]: 2025-11-26 02:06:19.577088725 +0000 UTC m=+0.104824829 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:06:19 compute-0 podman[435713]: 2025-11-26 02:06:19.58223085 +0000 UTC m=+0.114840831 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:06:19 compute-0 podman[435712]: 2025-11-26 02:06:19.582571939 +0000 UTC m=+0.122454514 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 26 02:06:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:21 compute-0 nova_compute[350387]: 2025-11-26 02:06:21.799 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:23 compute-0 nova_compute[350387]: 2025-11-26 02:06:23.878 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:24 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Nov 26 02:06:24 compute-0 systemd[1]: session-62.scope: Consumed 5.808s CPU time.
Nov 26 02:06:24 compute-0 systemd-logind[800]: Session 62 logged out. Waiting for processes to exit.
Nov 26 02:06:24 compute-0 systemd-logind[800]: Removed session 62.
Nov 26 02:06:24 compute-0 podman[435771]: 2025-11-26 02:06:24.587443146 +0000 UTC m=+0.134241334 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:06:24 compute-0 podman[435772]: 2025-11-26 02:06:24.659930908 +0000 UTC m=+0.199589856 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 02:06:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:06:24.988 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:06:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:06:24.989 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:06:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:06:24.990 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
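The Acquiring/acquired/"released" trio above is oslo.concurrency's standard lock tracing: one DEBUG line when the lock is requested, one when it is taken (with the wait time), and one on release (with the hold time). Underneath it is just a named lock used as a context manager or decorator; a minimal sketch, assuming oslo.concurrency is installed and logging is configured at DEBUG:

    from oslo_concurrency import lockutils

    # Context-manager form: emits the same three DEBUG lines seen above.
    with lockutils.lock("_check_child_processes"):
        pass  # do the child-process check while holding the lock

    # Decorator form, as used throughout neutron and nova.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass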
Nov 26 02:06:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:26 compute-0 podman[435814]: 2025-11-26 02:06:26.572235547 +0000 UTC m=+0.117284589 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
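Kepler publishes its power and energy counters as a Prometheus endpoint on the host port from the config above ('ports': ['8888:8888']). A minimal scrape sketch; the /metrics path is the Prometheus convention and an assumption here, and kepler_ is the exporter's metric prefix:

    import urllib.request

    # Pull the text-format metrics from the host port mapped above.
    with urllib.request.urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("kepler_"):
                print(line)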
Nov 26 02:06:26 compute-0 nova_compute[350387]: 2025-11-26 02:06:26.802 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:06:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428183106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:06:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:06:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428183106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:06:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.804718) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787805247, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 743, "num_deletes": 251, "total_data_size": 952901, "memory_usage": 967544, "flush_reason": "Manual Compaction"}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787817457, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 944050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33991, "largest_seqno": 34733, "table_properties": {"data_size": 940172, "index_size": 1658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8472, "raw_average_key_size": 19, "raw_value_size": 932538, "raw_average_value_size": 2124, "num_data_blocks": 74, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122726, "oldest_key_time": 1764122726, "file_creation_time": 1764122787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12793 microseconds, and 7381 cpu microseconds.
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.817518) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 944050 bytes OK
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.817547) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.820226) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.820248) EVENT_LOG_v1 {"time_micros": 1764122787820240, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.820271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 949129, prev total WAL file size 949129, number of live WAL files 2.
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.821412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(921KB)], [77(7666KB)]
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787821532, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 8794733, "oldest_snapshot_seqno": -1}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5244 keys, 7074711 bytes, temperature: kUnknown
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787882110, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7074711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7041773, "index_size": 18748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133605, "raw_average_key_size": 25, "raw_value_size": 6948875, "raw_average_value_size": 1325, "num_data_blocks": 767, "num_entries": 5244, "num_filter_entries": 5244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764122787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.882439) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7074711 bytes
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.885056) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 116.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.5 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(16.8) write-amplify(7.5) OK, records in: 5756, records dropped: 512 output_compression: NoCompression
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.885094) EVENT_LOG_v1 {"time_micros": 1764122787885076, "job": 44, "event": "compaction_finished", "compaction_time_micros": 60667, "compaction_time_cpu_micros": 40106, "output_level": 6, "num_output_files": 1, "total_output_size": 7074711, "num_input_records": 5756, "num_output_records": 5244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787885609, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764122787888488, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.821120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.888689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.888696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.888699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.888702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:06:27 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:06:27.888705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
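The compaction summary for JOB 44 can be checked by hand from the logged sizes: the job read one 944050-byte L0 file plus the existing L6 file (input_data_size 8794733 bytes in total) and wrote a 7074711-byte L6 file, so write amplification is output over new L0 input, and read-write amplification is everything read plus everything written over new L0 input. A sketch of that arithmetic:

    # Sizes from the JOB 44 events above, in bytes.
    l0_in = 944050        # flushed table #79
    total_in = 8794733    # input_data_size (L0 + L6)
    out = 7074711         # compacted table #80

    write_amplify = out / l0_in                    # ~7.5, as logged
    read_write_amplify = (total_in + out) / l0_in  # ~16.8, as logged
    print(round(write_amplify, 1), round(read_write_amplify, 1))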
Nov 26 02:06:28 compute-0 nova_compute[350387]: 2025-11-26 02:06:28.882 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:28 compute-0 podman[435858]: 2025-11-26 02:06:28.914161761 +0000 UTC m=+0.119020468 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true)
Nov 26 02:06:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.343 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.343 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:06:29 compute-0 podman[158021]: time="2025-11-26T02:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:06:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:06:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8646 "" "Go-http-client/1.1"
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8fd84259-9b5b-4c44-ac5b-133c70d6f447 does not exist
Nov 26 02:06:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8598a1aa-3d4d-478a-80b4-fa4099be2e3b does not exist
Nov 26 02:06:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 54382455-7538-434b-9920-d500ecf984e1 does not exist
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:06:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:06:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259200582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.878 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
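The resource tracker gathers Ceph capacity by shelling out to `ceph df` exactly as logged above (dispatched at :29.343, returned 0 after 0.534s). A small sketch of the same call; the "stats"/"total_*_bytes" keys assume the standard `ceph df --format=json` layout rather than anything shown in the log itself:

    # Re-run the command from the log and read the cluster totals.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)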
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.986 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.987 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.987 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.994 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.994 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:29 compute-0 nova_compute[350387]: 2025-11-26 02:06:29.994 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:06:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.508 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.509 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3622MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.509 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.510 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.619 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.620 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.620 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.620 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.642 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.674 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.674 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
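Placement turns an inventory record like the one above into schedulable capacity as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM and 52.2 DISK_GB. A worked check against the logged numbers (min_unit/max_unit/step_size omitted as they do not affect the totals):

    # Capacity rule used by placement: (total - reserved) * allocation_ratio.
    # Values copied from the inventory in the log lines above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2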
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.695 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.722 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 02:06:30 compute-0 nova_compute[350387]: 2025-11-26 02:06:30.807 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:06:30 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:06:30 compute-0 podman[436143]: 2025-11-26 02:06:30.920777855 +0000 UTC m=+0.073759869 container create 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:06:30 compute-0 podman[436143]: 2025-11-26 02:06:30.883960323 +0000 UTC m=+0.036942427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:30 compute-0 systemd[1]: Started libpod-conmon-749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4.scope.
Nov 26 02:06:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:31 compute-0 podman[436143]: 2025-11-26 02:06:31.051610233 +0000 UTC m=+0.204592327 container init 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 02:06:31 compute-0 podman[436143]: 2025-11-26 02:06:31.067370405 +0000 UTC m=+0.220352459 container start 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:06:31 compute-0 podman[436143]: 2025-11-26 02:06:31.073595549 +0000 UTC m=+0.226577603 container attach 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:06:31 compute-0 stupefied_khayyam[436176]: 167 167
Nov 26 02:06:31 compute-0 systemd[1]: libpod-749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4.scope: Deactivated successfully.
Nov 26 02:06:31 compute-0 podman[436143]: 2025-11-26 02:06:31.082692814 +0000 UTC m=+0.235674858 container died 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef28102a11ca9951ef2972b7f71008800cac2686a1338c6f1ad8e9de945a1b8-merged.mount: Deactivated successfully.
Nov 26 02:06:31 compute-0 podman[436143]: 2025-11-26 02:06:31.166333399 +0000 UTC m=+0.319315433 container remove 749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:31 compute-0 systemd[1]: libpod-conmon-749b18dc0d987a1d2d08b8d683dd44fcd87a45eb41a58826680c65c03024aef4.scope: Deactivated successfully.
Nov 26 02:06:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:06:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202837870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:06:31 compute-0 nova_compute[350387]: 2025-11-26 02:06:31.259 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:06:31 compute-0 nova_compute[350387]: 2025-11-26 02:06:31.272 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:06:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:31 compute-0 nova_compute[350387]: 2025-11-26 02:06:31.291 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:06:31 compute-0 nova_compute[350387]: 2025-11-26 02:06:31.294 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:06:31 compute-0 nova_compute[350387]: 2025-11-26 02:06:31.295 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:06:31 compute-0 openstack_network_exporter[367323]: ERROR   02:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:06:31 compute-0 openstack_network_exporter[367323]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:06:31 compute-0 openstack_network_exporter[367323]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:06:31 compute-0 openstack_network_exporter[367323]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:06:31 compute-0 openstack_network_exporter[367323]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:06:31 compute-0 podman[436202]: 2025-11-26 02:06:31.43386575 +0000 UTC m=+0.087257928 container create 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:06:31 compute-0 podman[436202]: 2025-11-26 02:06:31.386393159 +0000 UTC m=+0.039785357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:31 compute-0 systemd[1]: Started libpod-conmon-835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb.scope.
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:32.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:32.296 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:32.296 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:32.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
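Each "Running periodic task" line corresponds to a compute-manager method tagged with oslo.service's periodic_task decorator. A minimal sketch of that idiom; the class, method body and spacing value are illustrative, not Nova's code:

    # Sketch of the oslo.service pattern behind the periodic-task lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # interval is illustrative
        def _poll_rebooting_instances(self, context):
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(None)  # emits one DEBUG line per task it runs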
Nov 26 02:06:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:33.208 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:33 compute-0 podman[436202]: 2025-11-26 02:06:33.248705256 +0000 UTC m=+1.902097524 container init 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:06:33 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:06:33 compute-0 podman[436202]: 2025-11-26 02:06:33.280317153 +0000 UTC m=+1.933709331 container start 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:06:33 compute-0 podman[436202]: 2025-11-26 02:06:33.28521328 +0000 UTC m=+1.938605548 container attach 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:33 compute-0 podman[436222]: 2025-11-26 02:06:33.382326032 +0000 UTC m=+0.138358969 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:06:33 compute-0 podman[436221]: 2025-11-26 02:06:33.38329628 +0000 UTC m=+0.141832737 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Nov 26 02:06:33 compute-0 nova_compute[350387]: 2025-11-26 02:06:33.886 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:34 compute-0 nova_compute[350387]: 2025-11-26 02:06:34.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:34 compute-0 nova_compute[350387]: 2025-11-26 02:06:34.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:06:34 compute-0 jovial_cerf[436218]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:06:34 compute-0 jovial_cerf[436218]: --> relative data size: 1.0
Nov 26 02:06:34 compute-0 jovial_cerf[436218]: --> All data devices are unavailable
Nov 26 02:06:34 compute-0 systemd[1]: libpod-835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb.scope: Deactivated successfully.
Nov 26 02:06:34 compute-0 systemd[1]: libpod-835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb.scope: Consumed 1.294s CPU time.
Nov 26 02:06:34 compute-0 podman[436291]: 2025-11-26 02:06:34.721598818 +0000 UTC m=+0.048349946 container died 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 02:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeecf0f22cfea9df963480f44e8ca42b4f7e20928cf532fcd662332668681e8a-merged.mount: Deactivated successfully.
Nov 26 02:06:34 compute-0 podman[436291]: 2025-11-26 02:06:34.808391421 +0000 UTC m=+0.135142499 container remove 835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:06:34 compute-0 systemd[1]: libpod-conmon-835d6c7cae0bc0b674de7b060d77f853cfc87733aee9cb8bd91c49a60d5f4ccb.scope: Deactivated successfully.
Nov 26 02:06:34 compute-0 nova_compute[350387]: 2025-11-26 02:06:34.919 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:06:34 compute-0 nova_compute[350387]: 2025-11-26 02:06:34.921 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:06:34 compute-0 nova_compute[350387]: 2025-11-26 02:06:34.921 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:06:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:35 compute-0 podman[436444]: 2025-11-26 02:06:35.9947385 +0000 UTC m=+0.079400767 container create 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:35.96512729 +0000 UTC m=+0.049789647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:36 compute-0 systemd[1]: Started libpod-conmon-6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1.scope.
Nov 26 02:06:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:36.142965595 +0000 UTC m=+0.227627892 container init 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:36.154442217 +0000 UTC m=+0.239104514 container start 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:36.160739904 +0000 UTC m=+0.245402201 container attach 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:36 compute-0 strange_ritchie[436460]: 167 167
Nov 26 02:06:36 compute-0 systemd[1]: libpod-6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1.scope: Deactivated successfully.
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:36.16560016 +0000 UTC m=+0.250262427 container died 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e9efdff666fa93322943a67ea2bbc00c9410d65a0cfc47e4b6a183b3cd9b6b0-merged.mount: Deactivated successfully.
Nov 26 02:06:36 compute-0 podman[436444]: 2025-11-26 02:06:36.236014914 +0000 UTC m=+0.320677181 container remove 6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:06:36 compute-0 systemd[1]: libpod-conmon-6528d70e93e5ee2609ef587170518c756740c553bf949f1b208f1221652a10d1.scope: Deactivated successfully.
Nov 26 02:06:36 compute-0 podman[436483]: 2025-11-26 02:06:36.539920484 +0000 UTC m=+0.084502900 container create b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:06:36 compute-0 podman[436483]: 2025-11-26 02:06:36.506539528 +0000 UTC m=+0.051122034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:36 compute-0 systemd[1]: Started libpod-conmon-b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4.scope.
Nov 26 02:06:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad1ff6cd0fe06771641b6535d02f45b4c5154ac8397194ce05118ff9ad544d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad1ff6cd0fe06771641b6535d02f45b4c5154ac8397194ce05118ff9ad544d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad1ff6cd0fe06771641b6535d02f45b4c5154ac8397194ce05118ff9ad544d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad1ff6cd0fe06771641b6535d02f45b4c5154ac8397194ce05118ff9ad544d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:36 compute-0 podman[436483]: 2025-11-26 02:06:36.742560255 +0000 UTC m=+0.287142751 container init b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:06:36 compute-0 podman[436483]: 2025-11-26 02:06:36.760886688 +0000 UTC m=+0.305469114 container start b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:36 compute-0 podman[436483]: 2025-11-26 02:06:36.767251507 +0000 UTC m=+0.311833953 container attach b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:06:36 compute-0 nova_compute[350387]: 2025-11-26 02:06:36.976 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
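The info-cache entry above records one OVN-bound OVS VIF with a fixed address and an associated floating IP. A short sketch that walks a network_info list shaped like that logged entry and yields the addresses:

    # Walk a network_info structure shaped like the cached entry above.
    def addresses(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], "fixed"
                    for fip in ip.get("floating_ips", []):
                        yield fip["address"], "floating"

    # For the entry above this yields:
    #   ('192.168.0.232', 'fixed') and ('192.168.122.234', 'floating')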
Nov 26 02:06:36 compute-0 nova_compute[350387]: 2025-11-26 02:06:36.995 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:06:36 compute-0 nova_compute[350387]: 2025-11-26 02:06:36.996 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:06:36 compute-0 nova_compute[350387]: 2025-11-26 02:06:36.996 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:37 compute-0 nova_compute[350387]: 2025-11-26 02:06:37.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:37 compute-0 nova_compute[350387]: 2025-11-26 02:06:37.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:06:37 compute-0 nova_compute[350387]: 2025-11-26 02:06:37.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]: {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    "0": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "devices": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "/dev/loop3"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            ],
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_name": "ceph_lv0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_size": "21470642176",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "name": "ceph_lv0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "tags": {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_name": "ceph",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.crush_device_class": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.encrypted": "0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_id": "0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.vdo": "0"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            },
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "vg_name": "ceph_vg0"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        }
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    ],
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    "1": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "devices": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "/dev/loop4"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            ],
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_name": "ceph_lv1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_size": "21470642176",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "name": "ceph_lv1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "tags": {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_name": "ceph",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.crush_device_class": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.encrypted": "0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_id": "1",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.vdo": "0"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            },
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "vg_name": "ceph_vg1"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        }
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    ],
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    "2": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "devices": [
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "/dev/loop5"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            ],
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_name": "ceph_lv2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_size": "21470642176",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "name": "ceph_lv2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "tags": {
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.cluster_name": "ceph",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.crush_device_class": "",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.encrypted": "0",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osd_id": "2",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:                "ceph.vdo": "0"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            },
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "type": "block",
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:            "vg_name": "ceph_vg2"
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:        }
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]:    ]
Nov 26 02:06:37 compute-0 heuristic_perlman[436499]: }
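The JSON block above is keyed by OSD id (0-2) and has the shape of `ceph-volume lvm list --format json` output; the log does not show the exact command the heuristic_perlman container ran, so treat that as an assumption. A minimal sketch that would reproduce the same inventory on a host where ceph-volume is installed:

    import json
    import subprocess

    # Assumption: the listing above came from `ceph-volume lvm list --format json`.
    out = subprocess.check_output(["ceph-volume", "lvm", "list", "--format", "json"])
    lvm = json.loads(out)

    # Map each OSD id to its logical volume and backing devices.
    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(vg={lv['vg_name']}, devices={','.join(lv['devices'])})")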
Nov 26 02:06:37 compute-0 systemd[1]: libpod-b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4.scope: Deactivated successfully.
Nov 26 02:06:37 compute-0 podman[436508]: 2025-11-26 02:06:37.677441952 +0000 UTC m=+0.056248257 container died b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 02:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad1ff6cd0fe06771641b6535d02f45b4c5154ac8397194ce05118ff9ad544d8-merged.mount: Deactivated successfully.
Nov 26 02:06:37 compute-0 podman[436508]: 2025-11-26 02:06:37.778169526 +0000 UTC m=+0.156975821 container remove b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:06:37 compute-0 systemd[1]: libpod-conmon-b2bd929e6dcf2d40eb88457e7b55cc1f2af992fa93311f1719b4a12215b954d4.scope: Deactivated successfully.
Nov 26 02:06:38 compute-0 nova_compute[350387]: 2025-11-26 02:06:38.206 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:38 compute-0 podman[436659]: 2025-11-26 02:06:38.851779214 +0000 UTC m=+0.081638899 container create b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:06:38 compute-0 nova_compute[350387]: 2025-11-26 02:06:38.889 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:38 compute-0 podman[436659]: 2025-11-26 02:06:38.818586914 +0000 UTC m=+0.048446649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:38 compute-0 systemd[1]: Started libpod-conmon-b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053.scope.
Nov 26 02:06:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:38 compute-0 podman[436659]: 2025-11-26 02:06:38.992797418 +0000 UTC m=+0.222657153 container init b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:06:39 compute-0 podman[436659]: 2025-11-26 02:06:39.010086822 +0000 UTC m=+0.239946497 container start b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:06:39 compute-0 podman[436659]: 2025-11-26 02:06:39.016938715 +0000 UTC m=+0.246798400 container attach b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:06:39 compute-0 great_dewdney[436675]: 167 167
Nov 26 02:06:39 compute-0 systemd[1]: libpod-b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053.scope: Deactivated successfully.
Nov 26 02:06:39 compute-0 podman[436659]: 2025-11-26 02:06:39.023986892 +0000 UTC m=+0.253846607 container died b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:06:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcfb8f4b31e5c70cb2c75123684dc80ff0d7a0055df0941de1c44f1ce35c6419-merged.mount: Deactivated successfully.
Nov 26 02:06:39 compute-0 podman[436659]: 2025-11-26 02:06:39.116816965 +0000 UTC m=+0.346676640 container remove b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:39 compute-0 systemd[1]: libpod-conmon-b456617aaf98a82a331901e34c221cedb12d16f1af62aa1e87f8c26630ce7053.scope: Deactivated successfully.
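The great_dewdney container above runs through the full short-lived lifecycle (create, init, start, attach, died, remove) in well under a second; these are the throwaway probe containers cephadm launches, and the "167 167" it printed matches the uid/gid of the ceph user inside the ceph image (an inference, not something the log states). A sketch for watching such lifecycles live, assuming the podman CLI is available on the host:

    import json
    import subprocess

    # Stream podman events as JSON and print lifecycle transitions for
    # containers spawned from the ceph image.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        # Expect create -> init -> start -> attach -> died -> remove,
        # mirroring the journal sequence above.
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])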
Nov 26 02:06:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:39 compute-0 podman[436697]: 2025-11-26 02:06:39.40844688 +0000 UTC m=+0.103546304 container create 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 02:06:39 compute-0 podman[436697]: 2025-11-26 02:06:39.350512296 +0000 UTC m=+0.045611710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:06:39 compute-0 systemd[1]: Started libpod-conmon-9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46.scope.
Nov 26 02:06:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d156d1039ec28002f6c5d69cfaaf337db8444a7f971214b81256e56287ac72b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d156d1039ec28002f6c5d69cfaaf337db8444a7f971214b81256e56287ac72b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d156d1039ec28002f6c5d69cfaaf337db8444a7f971214b81256e56287ac72b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d156d1039ec28002f6c5d69cfaaf337db8444a7f971214b81256e56287ac72b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:06:39 compute-0 podman[436697]: 2025-11-26 02:06:39.55789421 +0000 UTC m=+0.252993634 container init 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:06:39 compute-0 podman[436697]: 2025-11-26 02:06:39.572041346 +0000 UTC m=+0.267140730 container start 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:06:39 compute-0 podman[436697]: 2025-11-26 02:06:39.577002296 +0000 UTC m=+0.272101690 container attach 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 02:06:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]: {
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_id": 0,
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "type": "bluestore"
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    },
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_id": 2,
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "type": "bluestore"
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    },
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_id": 1,
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:        "type": "bluestore"
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]:    }
Nov 26 02:06:40 compute-0 gracious_chaplygin[436713]: }
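This second listing is keyed by OSD fsid and resembles `ceph-volume raw list` output (again an assumption; the invocation is not in the log). Every entry carries the same ceph_fsid, so all three bluestore OSDs belong to cluster 36901f64-240e-5c29-a2e2-29b56f2c329c. A self-contained sketch of that consistency check, with the payload trimmed to one entry:

    import json

    # Stand-in for the JSON logged by gracious_chaplygin above.
    raw_list = json.loads('''{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"}}''')

    CLUSTER_FSID = "36901f64-240e-5c29-a2e2-29b56f2c329c"
    for osd_uuid, meta in raw_list.items():
        # Flag any OSD whose fsid does not match this cluster.
        assert meta["ceph_fsid"] == CLUSTER_FSID, f"foreign OSD {osd_uuid}"
        print(f"osd.{meta['osd_id']} -> {meta['device']} ({meta['type']})")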
Nov 26 02:06:40 compute-0 systemd[1]: libpod-9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46.scope: Deactivated successfully.
Nov 26 02:06:40 compute-0 podman[436697]: 2025-11-26 02:06:40.604232112 +0000 UTC m=+1.299331526 container died 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:06:40 compute-0 systemd[1]: libpod-9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46.scope: Consumed 1.028s CPU time.
Nov 26 02:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d156d1039ec28002f6c5d69cfaaf337db8444a7f971214b81256e56287ac72b-merged.mount: Deactivated successfully.
Nov 26 02:06:40 compute-0 podman[436697]: 2025-11-26 02:06:40.682848096 +0000 UTC m=+1.377947480 container remove 9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:06:40 compute-0 systemd[1]: libpod-conmon-9df9044be48a6b4250755e21277941d606bd0159cda00b4f3a7aaad828645b46.scope: Deactivated successfully.
Nov 26 02:06:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:06:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:06:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 89bd6d67-a267-4fb2-98b2-448a89b4b3ff does not exist
Nov 26 02:06:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f68e6c5c-c93b-43fc-8570-ce5685249a27 does not exist
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:06:41
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr']
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
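The balancer pass above ran in upmap mode against the eleven listed pools and prepared 0 of a maximum 10 changes, meaning the PG distribution already satisfied the optimizer. A sketch for querying the same state from the CLI, assuming an admin keyring on the host and that `ceph balancer status` honors `--format json` as most mgr commands do:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    # Fields of interest: whether the balancer is active and which mode it uses.
    print(status.get("active"), status.get("mode"))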
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:06:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:06:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.871 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.872 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.872 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.879 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'name': 'vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {'metering.server_group': '366b90b6-2e85-40c4-9ca1-855cf9022409'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.883 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'name': 'test_0', 'flavor': {'id': '030e95e2-5458-42ef-a5df-79a19c0b681d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '48e08d00-37a3-4465-a949-ff0b8afe4def'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4d902f6105ab4c81a51a4751fa89a83e', 'user_id': 'b130e7a8bed3424f9f5ff63b35cd2b28', 'hostId': '2ed939b37cf1f22877bdc27f08608d60d12e9512af52f6ec397107c1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.883 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.883 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.883 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.883 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.884 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.884 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.884 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.885 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.885 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.885 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.885 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:06:42.883589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:06:42.885231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.890 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.895 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.896 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:06:42.896711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.897 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:06:42.898573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.898 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.900 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.900 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.901 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:06:42.901282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.902 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.903 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.904 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:06:42.903458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.904 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.904 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.905 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.905 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.905 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.905 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:06:42.905270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.930 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/cpu volume: 43040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.958 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/cpu volume: 50750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.959 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.959 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.959 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.959 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.959 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:06:42.959890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.960 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.960 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.961 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.961 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.961 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.962 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.962 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.962 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:06:42.962331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.962 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.963 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/memory.usage volume: 48.93359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.963 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.963 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:06:42.964640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.964 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.965 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.965 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.967 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:06:42.966722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.966 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.967 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.967 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:06:42.968757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.968 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.969 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.969 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.970 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.970 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.970 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.970 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.970 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.971 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:06:42.970905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.971 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.971 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.971 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.972 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.972 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.972 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.972 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.972 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:06:42.972945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.973 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.973 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.974 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:06:42.975013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.975 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.998 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.998 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:42.999 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.032 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.032 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.033 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.033 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.033 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.033 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.033 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.034 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:06:43.034122) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.034 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.118 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.119 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.120 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.205 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.206 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.207 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.207 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 2007436788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 283353651 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.208 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.latency volume: 197487344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.209 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 2182324777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.209 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 336768448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.209 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.latency volume: 176765271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 nova_compute[350387]: 2025-11-26 02:06:43.209 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.209 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:06:43.208393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.210 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.211 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.211 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.211 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:06:43.210568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.211 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.212 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:06:43.212594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.213 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.213 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.213 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.213 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:06:43.214690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.215 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.216 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:06:43.216927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.217 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 5738822785 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 28688069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:06:43.218129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.218 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 5787370869 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.219 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 30575996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.219 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:06:43.220150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.220 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.221 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:06:43.223081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.compute.pollsters [-] d32050dc-c041-47df-994e-7d05cf1f489a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.223 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.224 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.224 15 DEBUG ceilometer.compute.pollsters [-] b1c088bc-7a6b-4580-93ff-685731747189/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.224 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.224 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.224 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.225 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:06:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:06:43.226 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
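[editor's note] Each pollster in the cycle above emits one "<instance-uuid>/<meter> volume: <n>" DEBUG line per sampled device, which is why every instance shows three lines per disk meter here (three block devices). A throwaway parser for summarizing such a polling cycle (the regex and the per-meter grouping are assumptions about the log format, not ceilometer code):

    import re
    from collections import defaultdict

    SAMPLE_RE = re.compile(
        r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
        r"/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
    )

    def summarize(lines):
        # Sum the per-device volumes into one figure per (instance, meter).
        totals = defaultdict(int)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                totals[(m["uuid"], m["meter"])] += int(m["volume"])
        return totals

    demo = ["b1c088bc-7a6b-4580-93ff-685731747189/disk.device.write.requests volume: 230"]
    print(summarize(demo))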
Nov 26 02:06:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:43 compute-0 nova_compute[350387]: 2025-11-26 02:06:43.892 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:48 compute-0 nova_compute[350387]: 2025-11-26 02:06:48.212 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:48 compute-0 nova_compute[350387]: 2025-11-26 02:06:48.894 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:50 compute-0 podman[436810]: 2025-11-26 02:06:50.547599071 +0000 UTC m=+0.086022412 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:06:50 compute-0 podman[436809]: 2025-11-26 02:06:50.565664858 +0000 UTC m=+0.112301019 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 02:06:50 compute-0 podman[436808]: 2025-11-26 02:06:50.594344802 +0000 UTC m=+0.130925312 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
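[editor's note] The health_status events above embed the full config_data each container was launched from. As a rough illustration of how such a dict maps onto a podman command line (the flags are standard podman run options; the mapping is a sketch, not the edpm_ansible implementation):

    def podman_args(name, cfg):
        # Translate a config_data-style dict into a podman run argv.
        args = ["podman", "run", "--detach", "--name", name]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        if cfg.get("privileged"):
            args += ["--privileged"]
        if cfg.get("net"):
            args += ["--net", cfg["net"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        if "healthcheck" in cfg:
            args += ["--health-cmd", cfg["healthcheck"]["test"]]
        args.append(cfg["image"])
        return args

    cfg = {
        "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "user": "root", "privileged": True, "net": "host",
        "ports": ["9882:9882"],
        "environment": {"CONTAINER_HOST": "unix:///run/podman/podman.sock"},
        "healthcheck": {"test": "/openstack/healthcheck podman_exporter"},
    }
    print(" ".join(podman_args("podman_exporter", cfg)))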
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:06:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
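[editor's note] The pg_autoscaler lines above follow a simple proportionality: the raw PG target is capacity_ratio * bias * T, where T is about 300 in this cluster (inferred from the logged numbers; it would be consistent with mon_target_pg_per_osd=100 across 3 OSDs). The final quantized pg_num additionally rounds to a power of two and applies per-pool minimums, which this quick check skips:

    # Reproduce the raw "pg target" figures logged by the autoscaler.
    T = 300  # inferred multiplier for this cluster; see note above

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * T

    # Pool 'vms': using 0.00110425264130364 of space, bias 1.0
    assert abs(raw_pg_target(0.00110425264130364, 1.0) - 0.331275792391092) < 1e-12
    # Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
    assert abs(raw_pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12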
Nov 26 02:06:53 compute-0 nova_compute[350387]: 2025-11-26 02:06:53.214 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:53 compute-0 nova_compute[350387]: 2025-11-26 02:06:53.897 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:06:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:55 compute-0 podman[436865]: 2025-11-26 02:06:55.562277081 +0000 UTC m=+0.120650463 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 02:06:55 compute-0 podman[436866]: 2025-11-26 02:06:55.594597828 +0000 UTC m=+0.149218345 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 02:06:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:57 compute-0 podman[436909]: 2025-11-26 02:06:57.579875824 +0000 UTC m=+0.124212134 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible)
Nov 26 02:06:58 compute-0 nova_compute[350387]: 2025-11-26 02:06:58.217 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:58 compute-0 nova_compute[350387]: 2025-11-26 02:06:58.900 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:06:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:06:59 compute-0 podman[436927]: 2025-11-26 02:06:59.533498781 +0000 UTC m=+0.092605437 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 26 02:06:59 compute-0 podman[158021]: time="2025-11-26T02:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:06:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:06:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8643 "" "Go-http-client/1.1"
Nov 26 02:07:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:01 compute-0 openstack_network_exporter[367323]: ERROR   02:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:07:01 compute-0 openstack_network_exporter[367323]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:07:01 compute-0 openstack_network_exporter[367323]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:07:01 compute-0 openstack_network_exporter[367323]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:07:01 compute-0 openstack_network_exporter[367323]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
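[editor's note] The exporter errors above all reduce to one condition: ovs-appctl-style calls need a <daemon>.<pid>.ctl control socket in the daemon's run directory, and none were found. On a compute node that is partly expected, since ovn-northd runs on the control plane rather than here; the ovsdb-server and datapath errors are worth checking, though. A quick probe of the conventional socket locations (default paths; adjust to whatever the exporter actually mounts):

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", ", ".join(found) if found else "none found")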
Nov 26 02:07:03 compute-0 nova_compute[350387]: 2025-11-26 02:07:03.221 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:03 compute-0 podman[436948]: 2025-11-26 02:07:03.582562654 +0000 UTC m=+0.120393026 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:07:03 compute-0 podman[436947]: 2025-11-26 02:07:03.614925311 +0000 UTC m=+0.162049044 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:07:03 compute-0 nova_compute[350387]: 2025-11-26 02:07:03.903 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:08 compute-0 nova_compute[350387]: 2025-11-26 02:07:08.223 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:08 compute-0 nova_compute[350387]: 2025-11-26 02:07:08.907 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:13 compute-0 nova_compute[350387]: 2025-11-26 02:07:13.226 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:13 compute-0 nova_compute[350387]: 2025-11-26 02:07:13.910 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:18 compute-0 nova_compute[350387]: 2025-11-26 02:07:18.229 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:18 compute-0 nova_compute[350387]: 2025-11-26 02:07:18.913 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:07:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:21 compute-0 podman[436990]: 2025-11-26 02:07:21.573434356 +0000 UTC m=+0.119999755 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 02:07:21 compute-0 podman[436992]: 2025-11-26 02:07:21.584353282 +0000 UTC m=+0.115610892 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:07:21 compute-0 podman[436991]: 2025-11-26 02:07:21.603403936 +0000 UTC m=+0.141258431 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:07:23 compute-0 nova_compute[350387]: 2025-11-26 02:07:23.234 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:23 compute-0 nova_compute[350387]: 2025-11-26 02:07:23.916 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:24.989 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:24.990 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:24.991 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
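The acquiring/acquired/released triplet above is the standard trace emitted by oslo.concurrency's lock decorator. A minimal sketch of the pattern that produces it, assuming oslo.concurrency is installed; the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named (process-local by default) lock held; the
        # wrapper logs the wait and hold durations seen in the journal.
        pass

    check_child_processes()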
Nov 26 02:07:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:26 compute-0 podman[437048]: 2025-11-26 02:07:26.582675458 +0000 UTC m=+0.126560389 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi)
Nov 26 02:07:26 compute-0 podman[437049]: 2025-11-26 02:07:26.647589988 +0000 UTC m=+0.180486591 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:07:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:07:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195496613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:07:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:07:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195496613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:07:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:28 compute-0 nova_compute[350387]: 2025-11-26 02:07:28.237 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:28 compute-0 podman[437092]: 2025-11-26 02:07:28.606437202 +0000 UTC m=+0.154999486 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, version=9.4, container_name=kepler)
Nov 26 02:07:28 compute-0 nova_compute[350387]: 2025-11-26 02:07:28.921 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.333 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.333 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.334 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.334 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.335 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:07:29 compute-0 podman[158021]: time="2025-11-26T02:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:07:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:07:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
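The two GET requests above are the prometheus-podman-exporter polling podman's libpod REST API over the unix socket. A stdlib-only sketch of the same query; UnixHTTPConnection is an illustrative helper here, not a podman client API:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            # Swap the TCP connect for an AF_UNIX socket to the podman service.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")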
Nov 26 02:07:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:07:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614387938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:07:29 compute-0 nova_compute[350387]: 2025-11-26 02:07:29.889 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
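nova-compute shells out to the ceph CLI through oslo.concurrency's processutils, as the two log lines above show. A minimal sketch of the same probe, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable; the JSON field names follow recent ceph releases:

    import json
    from oslo_concurrency import processutils

    # Same command nova logs: returns (stdout, stderr) on exit code 0.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])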
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.014 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.014 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.015 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.024 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.024 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.025 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:07:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:30 compute-0 podman[437134]: 2025-11-26 02:07:30.879446045 +0000 UTC m=+0.423932175 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.958 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.959 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3639MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.959 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:30 compute-0 nova_compute[350387]: 2025-11-26 02:07:30.959 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:31 compute-0 nova_compute[350387]: 2025-11-26 02:07:31.359 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance b1c088bc-7a6b-4580-93ff-685731747189 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:07:31 compute-0 nova_compute[350387]: 2025-11-26 02:07:31.359 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance d32050dc-c041-47df-994e-7d05cf1f489a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:07:31 compute-0 nova_compute[350387]: 2025-11-26 02:07:31.360 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:07:31 compute-0 nova_compute[350387]: 2025-11-26 02:07:31.360 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:07:31 compute-0 openstack_network_exporter[367323]: ERROR   02:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:07:31 compute-0 openstack_network_exporter[367323]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:07:31 compute-0 openstack_network_exporter[367323]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:07:31 compute-0 openstack_network_exporter[367323]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:07:31 compute-0 openstack_network_exporter[367323]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:07:31 compute-0 nova_compute[350387]: 2025-11-26 02:07:31.582 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:07:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:07:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082595073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.079 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.090 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.108 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
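Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, which is how the host's 8 physical vcpus back a 32-VCPU ceiling at allocation_ratio 4.0. A worked version of the numbers in the log line above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2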
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.111 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.112 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:32 compute-0 nova_compute[350387]: 2025-11-26 02:07:32.113 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.144 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.145 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.145 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.146 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.241 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.364 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.365 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.365 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.365 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.366 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.368 350391 INFO nova.compute.manager [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Terminating instance#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.369 350391 DEBUG nova.compute.manager [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:07:33 compute-0 kernel: tap25d715a2-34 (unregistering): left promiscuous mode
Nov 26 02:07:33 compute-0 NetworkManager[48886]: <info>  [1764122853.4991] device (tap25d715a2-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:07:33 compute-0 ovn_controller[89102]: 2025-11-26T02:07:33Z|00058|binding|INFO|Releasing lport 25d715a2-34af-4ad1-bc6d-0303fb8763f1 from this chassis (sb_readonly=0)
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.516 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 ovn_controller[89102]: 2025-11-26T02:07:33Z|00059|binding|INFO|Setting lport 25d715a2-34af-4ad1-bc6d-0303fb8763f1 down in Southbound
Nov 26 02:07:33 compute-0 ovn_controller[89102]: 2025-11-26T02:07:33Z|00060|binding|INFO|Removing iface tap25d715a2-34 ovn-installed in OVS
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.520 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.527 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:2d:81 192.168.0.232'], port_security=['fa:16:3e:99:2d:81 192.168.0.232'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vnceagrg57o4-2ev52kuax77s-ynduxzek5ukb-port-7xnbby5gttbg', 'neutron:cidrs': '192.168.0.232/24', 'neutron:device_id': 'd32050dc-c041-47df-994e-7d05cf1f489a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vnceagrg57o4-2ev52kuax77s-ynduxzek5ukb-port-7xnbby5gttbg', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=25d715a2-34af-4ad1-bc6d-0303fb8763f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.528 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 25d715a2-34af-4ad1-bc6d-0303fb8763f1 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e unbound from our chassis#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.529 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97f5f89-70be-4349-beb5-5f8e6065072e#033[00m
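The agent matched a Port_Binding update for the lport being released. An equivalent ad-hoc query against the OVN southbound DB, sketched via the ovn-sbctl CLI (assumes southbound access from this node):

    import subprocess

    # Look up the same row the PortBindingUpdatedEvent matched above.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=25d715a2-34af-4ad1-bc6d-0303fb8763f1"],
        capture_output=True, text=True, check=True).stdout
    print(out)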
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.543 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.548 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[853511c8-835a-4ad3-acc8-fbeb283f370b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 26 02:07:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 15.705s CPU time.
Nov 26 02:07:33 compute-0 systemd-machined[138512]: Machine qemu-4-instance-00000004 terminated.
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.592 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.591 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[93db050d-04a7-4768-888f-6a41fa204c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.595 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4ffc4a-3de2-4a47-a258-9f3ac6648d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.600 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.611 350391 INFO nova.virt.libvirt.driver [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Instance destroyed successfully.#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.611 350391 DEBUG nova.objects.instance [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'resources' on Instance uuid d32050dc-c041-47df-994e-7d05cf1f489a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.625 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[2615adee-d846-41b2-8558-2d0ac9f24456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.646 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e37c0479-478c-4813-a3f2-ccd95c3c709c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97f5f89-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:e8:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 16, 'rx_bytes': 532, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 16, 'rx_bytes': 532, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544483, 'reachable_time': 33241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437198, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.649 350391 DEBUG nova.virt.libvirt.vif [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T01:57:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-grg57o4-2ev52kuax77s-ynduxzek5ukb-vnf-4yjvctsjnhrt',id=4,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T01:57:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='366b90b6-2e85-40c4-9ca1-855cf9022409'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-4lu5o0hu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T01:57:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODMyNzE4MjMzMjU4NjM4ODYzNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 02:07:33 compute-0 nova_compute[350387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODMyNzE4MjMzMjU4NjM4ODYzNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgzMjcxODIzMzI1ODYzODg2MzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MzI3MTgyMzMyNTg2Mzg4NjM2PT0tLQo=',user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=d32050dc-c041-47df-994e-7d05cf1f489a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.649 350391 DEBUG nova.network.os_vif_util [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.651 350391 DEBUG nova.network.os_vif_util [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.651 350391 DEBUG os_vif [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.653 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.654 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25d715a2-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.656 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.659 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.662 350391 INFO os_vif [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:2d:81,bridge_name='br-int',has_traffic_filtering=True,id=25d715a2-34af-4ad1-bc6d-0303fb8763f1,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25d715a2-34')#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.665 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[203b119b-5e97-44d2-8b92-5d2ac096644b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544500, 'tstamp': 544500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437200, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97f5f89-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544503, 'tstamp': 544503}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437200, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.667 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.671 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97f5f89-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.672 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.672 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97f5f89-70, col_values=(('external_ids', {'iface-id': '3824ec63-7278-42dc-8c72-8ec8e06c2f0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:33 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:33.673 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.690 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:33 compute-0 podman[437199]: 2025-11-26 02:07:33.774136366 +0000 UTC m=+0.099875811 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Nov 26 02:07:33 compute-0 rsyslogd[188548]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 02:07:33.649 350391 DEBUG nova.virt.libvirt.vif [None req-01b0a814-d6 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 02:07:33 compute-0 podman[437201]: 2025-11-26 02:07:33.783376484 +0000 UTC m=+0.111622710 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.850 350391 DEBUG nova.compute.manager [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-vif-unplugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.850 350391 DEBUG oslo_concurrency.lockutils [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.850 350391 DEBUG oslo_concurrency.lockutils [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.851 350391 DEBUG oslo_concurrency.lockutils [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.851 350391 DEBUG nova.compute.manager [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] No waiting events found dispatching network-vif-unplugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:07:33 compute-0 nova_compute[350387]: 2025-11-26 02:07:33.851 350391 DEBUG nova.compute.manager [req-7a7d13ee-a8bb-4e3b-b6f4-193e736b235c req-a8c9798a-4de4-48da-8eec-304b801a506a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-vif-unplugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:07:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:34.231 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:07:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:34.232 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:07:34 compute-0 nova_compute[350387]: 2025-11-26 02:07:34.238 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.046 350391 INFO nova.virt.libvirt.driver [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Deleting instance files /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a_del#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.047 350391 INFO nova.virt.libvirt.driver [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Deletion of /var/lib/nova/instances/d32050dc-c041-47df-994e-7d05cf1f489a_del complete#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.113 350391 INFO nova.compute.manager [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.114 350391 DEBUG oslo.service.loopingcall [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.115 350391 DEBUG nova.compute.manager [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.115 350391 DEBUG nova.network.neutron [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.326 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 26 02:07:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 121 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 37 op/s
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.961 350391 DEBUG nova.compute.manager [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.962 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.962 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.963 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.963 350391 DEBUG nova.compute.manager [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] No waiting events found dispatching network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.963 350391 WARNING nova.compute.manager [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received unexpected event network-vif-plugged-25d715a2-34af-4ad1-bc6d-0303fb8763f1 for instance with vm_state active and task_state deleting.#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.964 350391 DEBUG nova.compute.manager [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Received event network-changed-25d715a2-34af-4ad1-bc6d-0303fb8763f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.964 350391 DEBUG nova.compute.manager [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Refreshing instance network info cache due to event network-changed-25d715a2-34af-4ad1-bc6d-0303fb8763f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.965 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.965 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:07:35 compute-0 nova_compute[350387]: 2025-11-26 02:07:35.965 350391 DEBUG nova.network.neutron [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Refreshing network info cache for port 25d715a2-34af-4ad1-bc6d-0303fb8763f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.000 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.000 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.001 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.001 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.776 350391 DEBUG nova.network.neutron [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.798 350391 INFO nova.compute.manager [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Took 1.68 seconds to deallocate network for instance.#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.846 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.847 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:36 compute-0 nova_compute[350387]: 2025-11-26 02:07:36.943 350391 DEBUG oslo_concurrency.processutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:07:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 1.7 KiB/s wr, 96 op/s
Nov 26 02:07:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:07:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897605321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.458 350391 DEBUG oslo_concurrency.processutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.470 350391 DEBUG nova.compute.provider_tree [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.488 350391 DEBUG nova.scheduler.client.report [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.511 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.554 350391 INFO nova.scheduler.client.report [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Deleted allocations for instance d32050dc-c041-47df-994e-7d05cf1f489a#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.622 350391 DEBUG oslo_concurrency.lockutils [None req-01b0a814-d68f-4310-bbb7-0b48bbf3cbd8 b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "d32050dc-c041-47df-994e-7d05cf1f489a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.634 350391 DEBUG nova.network.neutron [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updated VIF entry in instance network info cache for port 25d715a2-34af-4ad1-bc6d-0303fb8763f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.635 350391 DEBUG nova.network.neutron [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Updating instance_info_cache with network_info: [{"id": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "address": "fa:16:3e:99:2d:81", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.232", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25d715a2-34", "ovs_interfaceid": "25d715a2-34af-4ad1-bc6d-0303fb8763f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.660 350391 DEBUG oslo_concurrency.lockutils [req-3c89f3c5-0095-44be-8447-069bff037854 req-21c1bec5-3611-44b3-ab49-23c5c58572d0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-d32050dc-c041-47df-994e-7d05cf1f489a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.803 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [{"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.824 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-b1c088bc-7a6b-4580-93ff-685731747189" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:07:37 compute-0 nova_compute[350387]: 2025-11-26 02:07:37.825 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 02:07:38 compute-0 nova_compute[350387]: 2025-11-26 02:07:38.244 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:38 compute-0 nova_compute[350387]: 2025-11-26 02:07:38.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:38 compute-0 nova_compute[350387]: 2025-11-26 02:07:38.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:38 compute-0 nova_compute[350387]: 2025-11-26 02:07:38.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:38 compute-0 nova_compute[350387]: 2025-11-26 02:07:38.657 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.323 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:07:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 1.7 KiB/s wr, 96 op/s
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.838 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.876 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid b1c088bc-7a6b-4580-93ff-685731747189 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.877 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.878 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "b1c088bc-7a6b-4580-93ff-685731747189" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:39 compute-0 nova_compute[350387]: 2025-11-26 02:07:39.908 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "b1c088bc-7a6b-4580-93ff-685731747189" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:07:41
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'backups', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 KiB/s wr, 99 op/s
Nov 26 02:07:41 compute-0 nova_compute[350387]: 2025-11-26 02:07:41.334 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:07:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:42 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0cbcc475-4989-48a7-be8d-470e39a4e499 does not exist
Nov 26 02:07:42 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9a39e4e8-475b-4d04-937a-fc16b012fc46 does not exist
Nov 26 02:07:42 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8eab8ef7-729f-479b-b1fc-70f011a14034 does not exist
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:07:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:07:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:07:43 compute-0 nova_compute[350387]: 2025-11-26 02:07:43.250 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 KiB/s wr, 99 op/s
Nov 26 02:07:43 compute-0 nova_compute[350387]: 2025-11-26 02:07:43.660 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.706345327 +0000 UTC m=+0.097710810 container create d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.666789748 +0000 UTC m=+0.058155281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:43 compute-0 systemd[1]: Started libpod-conmon-d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195.scope.
Nov 26 02:07:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.874883892 +0000 UTC m=+0.266249425 container init d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.893293009 +0000 UTC m=+0.284658482 container start d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.900403688 +0000 UTC m=+0.291769231 container attach d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:07:43 compute-0 agitated_faraday[437571]: 167 167
Nov 26 02:07:43 compute-0 systemd[1]: libpod-d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195.scope: Deactivated successfully.
Nov 26 02:07:43 compute-0 podman[437555]: 2025-11-26 02:07:43.906671714 +0000 UTC m=+0.298037197 container died d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba41be711713208fcf1cd296fa046782bcb05cffef7f24c5c83649a840a605e8-merged.mount: Deactivated successfully.
Nov 26 02:07:44 compute-0 podman[437555]: 2025-11-26 02:07:44.005379091 +0000 UTC m=+0.396744544 container remove d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 02:07:44 compute-0 systemd[1]: libpod-conmon-d4e5d195ccd81e400e2fc38fcece4ada3c5f200a961f332855dd77f730077195.scope: Deactivated successfully.
Nov 26 02:07:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:44.235 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:44 compute-0 podman[437593]: 2025-11-26 02:07:44.249769362 +0000 UTC m=+0.086593729 container create 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:07:44 compute-0 podman[437593]: 2025-11-26 02:07:44.208285919 +0000 UTC m=+0.045110346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:44 compute-0 systemd[1]: Started libpod-conmon-968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9.scope.
Nov 26 02:07:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:44 compute-0 podman[437593]: 2025-11-26 02:07:44.443571576 +0000 UTC m=+0.280396023 container init 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:07:44 compute-0 podman[437593]: 2025-11-26 02:07:44.464642246 +0000 UTC m=+0.301466623 container start 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:07:44 compute-0 podman[437593]: 2025-11-26 02:07:44.472318391 +0000 UTC m=+0.309142808 container attach 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:07:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 KiB/s wr, 99 op/s
Nov 26 02:07:45 compute-0 hungry_pike[437609]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:07:45 compute-0 hungry_pike[437609]: --> relative data size: 1.0
Nov 26 02:07:45 compute-0 hungry_pike[437609]: --> All data devices are unavailable
Nov 26 02:07:45 compute-0 systemd[1]: libpod-968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9.scope: Deactivated successfully.
Nov 26 02:07:45 compute-0 podman[437593]: 2025-11-26 02:07:45.825215728 +0000 UTC m=+1.662040105 container died 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 02:07:45 compute-0 systemd[1]: libpod-968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9.scope: Consumed 1.285s CPU time.
Nov 26 02:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dac67d61a1e53529d9b1d96cdf96c8185b3911dd0dc4ecfe34f92b1f1e07cd5-merged.mount: Deactivated successfully.
Nov 26 02:07:45 compute-0 podman[437593]: 2025-11-26 02:07:45.926441626 +0000 UTC m=+1.763266003 container remove 968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 02:07:45 compute-0 systemd[1]: libpod-conmon-968480def1ddf185d85784b399310cd928221d11d89675b3a3155dcb84a4bac9.scope: Deactivated successfully.
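
Note: the create -> init -> start -> attach -> died -> remove sequence above (container hungry_pike, ~1.3 s of CPU) is a one-shot container used to probe the host's disks, and the "--> passed data devices: 0 physical, 3 LVM" / "--> All data devices are unavailable" lines are ceph-volume's batch planner reporting that all three logical volumes are already consumed (the lvm listing further down confirms each one carries an OSD). A minimal sketch of re-issuing that probe by hand, assuming (not confirmed by this log) that `ceph-volume lvm batch --report` is the command behind these lines and that the mounts shown are sufficient:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot, auto-removed container, mirroring the podman lifecycle events
    # logged above (create -> init -> start -> attach -> died -> remove).
    probe = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "-v", "/dev:/dev", "-v", "/run/lvm:/run/lvm",
         IMAGE, "ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        capture_output=True, text=True, check=False)
    print(probe.stdout or probe.stderr)
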
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.201491941 +0000 UTC m=+0.096023813 container create be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.168016223 +0000 UTC m=+0.062548155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:47 compute-0 systemd[1]: Started libpod-conmon-be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8.scope.
Nov 26 02:07:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 62 op/s
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.342759352 +0000 UTC m=+0.237291274 container init be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.359890552 +0000 UTC m=+0.254422434 container start be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.367350551 +0000 UTC m=+0.261882433 container attach be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:47 compute-0 jolly_tesla[437805]: 167 167
Nov 26 02:07:47 compute-0 systemd[1]: libpod-be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8.scope: Deactivated successfully.
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.373174434 +0000 UTC m=+0.267706306 container died be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-943ac1d6c90542251d199e480a4fc0cf3ed72ad52e61a2c1501dd5ca382a1853-merged.mount: Deactivated successfully.
Nov 26 02:07:47 compute-0 podman[437789]: 2025-11-26 02:07:47.444638998 +0000 UTC m=+0.339170870 container remove be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 02:07:47 compute-0 systemd[1]: libpod-conmon-be082def54c9c44c0fcca4b6805babb943150e6bfb1f67269d017296c4bb37c8.scope: Deactivated successfully.
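
Note: the single line of output from jolly_tesla, "167 167", looks like a uid/gid probe: 167 is the uid and gid of the ceph user and group inside the ceph container images, and the orchestrator needs it to chown host-side data directories to match. A sketch of the same check, assuming (not shown in this log) that a stat of /var/lib/ceph inside the image is what produced that output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Prints "167 167" for the stock ceph image: the uid and gid owning
    # /var/lib/ceph inside the container.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.split()
    print(dict(zip(("uid", "gid"), map(int, uid_gid))))
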
Nov 26 02:07:47 compute-0 podman[437827]: 2025-11-26 02:07:47.683931006 +0000 UTC m=+0.082416101 container create 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:07:47 compute-0 podman[437827]: 2025-11-26 02:07:47.648712179 +0000 UTC m=+0.047197364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:47 compute-0 systemd[1]: Started libpod-conmon-4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2.scope.
Nov 26 02:07:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d7d4189f3da418789248c523dd759b16e6a6e32f565d9204d0abb98e40e5e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d7d4189f3da418789248c523dd759b16e6a6e32f565d9204d0abb98e40e5e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d7d4189f3da418789248c523dd759b16e6a6e32f565d9204d0abb98e40e5e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d7d4189f3da418789248c523dd759b16e6a6e32f565d9204d0abb98e40e5e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:47 compute-0 podman[437827]: 2025-11-26 02:07:47.826247626 +0000 UTC m=+0.224732761 container init 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:47 compute-0 podman[437827]: 2025-11-26 02:07:47.85814374 +0000 UTC m=+0.256628865 container start 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:07:47 compute-0 podman[437827]: 2025-11-26 02:07:47.865385683 +0000 UTC m=+0.263870808 container attach 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.252 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.608 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764122853.6063066, d32050dc-c041-47df-994e-7d05cf1f489a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.609 350391 INFO nova.compute.manager [-] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.631 350391 DEBUG nova.compute.manager [None req-7f6099ea-cfdb-4c10-9bee-028b7bd0bd46 - - - - - -] [instance: d32050dc-c041-47df-994e-7d05cf1f489a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:07:48 compute-0 nova_compute[350387]: 2025-11-26 02:07:48.662 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:48 compute-0 tender_kepler[437843]: {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    "0": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "devices": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "/dev/loop3"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            ],
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_name": "ceph_lv0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_size": "21470642176",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "name": "ceph_lv0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "tags": {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_name": "ceph",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.crush_device_class": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.encrypted": "0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_id": "0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.vdo": "0"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            },
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "vg_name": "ceph_vg0"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        }
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    ],
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    "1": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "devices": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "/dev/loop4"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            ],
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_name": "ceph_lv1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_size": "21470642176",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "name": "ceph_lv1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "tags": {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_name": "ceph",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.crush_device_class": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.encrypted": "0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_id": "1",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.vdo": "0"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            },
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "vg_name": "ceph_vg1"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        }
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    ],
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    "2": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "devices": [
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "/dev/loop5"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            ],
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_name": "ceph_lv2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_size": "21470642176",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "name": "ceph_lv2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "tags": {
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.cluster_name": "ceph",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.crush_device_class": "",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.encrypted": "0",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osd_id": "2",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:                "ceph.vdo": "0"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            },
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "type": "block",
Nov 26 02:07:48 compute-0 tender_kepler[437843]:            "vg_name": "ceph_vg2"
Nov 26 02:07:48 compute-0 tender_kepler[437843]:        }
Nov 26 02:07:48 compute-0 tender_kepler[437843]:    ]
Nov 26 02:07:48 compute-0 tender_kepler[437843]: }
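
Note: the JSON block emitted by tender_kepler is keyed by OSD id and matches the shape of `ceph-volume lvm list --format json` output: one LV record per OSD, with the ceph.* LV tags present both flattened (lv_tags) and parsed (tags). A minimal pass over it, assuming the payload has been extracted from the journal and is fed on stdin:

    import json
    import sys

    lvm = json.load(sys.stdin)
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"pv={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For the data above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, all tagged with cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c.
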
Nov 26 02:07:48 compute-0 systemd[1]: libpod-4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2.scope: Deactivated successfully.
Nov 26 02:07:48 compute-0 podman[437827]: 2025-11-26 02:07:48.757619755 +0000 UTC m=+1.156104860 container died 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d7d4189f3da418789248c523dd759b16e6a6e32f565d9204d0abb98e40e5e7-merged.mount: Deactivated successfully.
Nov 26 02:07:48 compute-0 podman[437827]: 2025-11-26 02:07:48.850309364 +0000 UTC m=+1.248794489 container remove 4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 02:07:48 compute-0 systemd[1]: libpod-conmon-4b9651d2547e8b9f48a8b4c26b93d28fef5728f3444154e85bb54ee473b9deb2.scope: Deactivated successfully.
Nov 26 02:07:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Nov 26 02:07:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.077190829 +0000 UTC m=+0.075870408 container create 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.045815889 +0000 UTC m=+0.044495498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:50 compute-0 systemd[1]: Started libpod-conmon-51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0.scope.
Nov 26 02:07:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.239989663 +0000 UTC m=+0.238669272 container init 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.248519832 +0000 UTC m=+0.247199401 container start 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.256407603 +0000 UTC m=+0.255087152 container attach 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 02:07:50 compute-0 brave_boyd[438017]: 167 167
Nov 26 02:07:50 compute-0 systemd[1]: libpod-51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0.scope: Deactivated successfully.
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.260614671 +0000 UTC m=+0.259294200 container died 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-35fc0edd1e72f351607a41681cb648345114fe4ee7d832bed09c31af103c8156-merged.mount: Deactivated successfully.
Nov 26 02:07:50 compute-0 podman[438000]: 2025-11-26 02:07:50.327772494 +0000 UTC m=+0.326452053 container remove 51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:07:50 compute-0 systemd[1]: libpod-conmon-51e76364a2130de69383a64c51f4bf2482fec099c7ef120f5957e83bdd7738a0.scope: Deactivated successfully.
Nov 26 02:07:50 compute-0 podman[438040]: 2025-11-26 02:07:50.576029363 +0000 UTC m=+0.071756202 container create 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:07:50 compute-0 podman[438040]: 2025-11-26 02:07:50.546353681 +0000 UTC m=+0.042080590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:07:50 compute-0 systemd[1]: Started libpod-conmon-45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22.scope.
Nov 26 02:07:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfa798c773e69440160f10d0740b2c08e6ded2233d4c71f110fcef4f0f1b435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfa798c773e69440160f10d0740b2c08e6ded2233d4c71f110fcef4f0f1b435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfa798c773e69440160f10d0740b2c08e6ded2233d4c71f110fcef4f0f1b435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfa798c773e69440160f10d0740b2c08e6ded2233d4c71f110fcef4f0f1b435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:07:50 compute-0 podman[438040]: 2025-11-26 02:07:50.728385815 +0000 UTC m=+0.224112674 container init 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:07:50 compute-0 podman[438040]: 2025-11-26 02:07:50.757592913 +0000 UTC m=+0.253319782 container start 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 02:07:50 compute-0 podman[438040]: 2025-11-26 02:07:50.764165088 +0000 UTC m=+0.259891957 container attach 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 2 op/s
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005513314368040633 of space, bias 1.0, pg target 0.16539943104121899 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:07:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
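
Note: each pg_autoscaler line above follows one arithmetic pattern: pg target = (fraction of space used) x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times this cluster's three OSDs; the result is then quantized to a power of two subject to per-pool minimums, which is why cephfs.cephfs.meta (bias 4.0) lands on 16 rather than 1. A check of that relation against three of the logged lines:

    # Values copied from the pg_autoscaler lines above; the factor 300 is an
    # assumption (mon_target_pg_per_osd=100 x 3 OSDs), not stated in the log.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0005513314368040633, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        # Matches (to float precision) the "pg target" figures logged above:
        # 0.0021557..., 0.16539..., 0.00061047...
        print(name, usage * bias * 300)
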
Nov 26 02:07:51 compute-0 quirky_allen[438055]: {
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_id": 0,
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "type": "bluestore"
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    },
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_id": 2,
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "type": "bluestore"
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    },
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_id": 1,
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:07:51 compute-0 quirky_allen[438055]:        "type": "bluestore"
Nov 26 02:07:51 compute-0 quirky_allen[438055]:    }
Nov 26 02:07:51 compute-0 quirky_allen[438055]: }
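
Note: the quirky_allen payload is keyed by OSD uuid rather than id and reports the device-mapper path of each bluestore OSD (the shape of `ceph-volume raw list` output). A small consistency check against the earlier per-id listing, assuming both JSON blocks have already been captured from the journal into dicts:

    def check(raw_list: dict, lvm_list: dict) -> None:
        # raw_list: the uuid-keyed block above; lvm_list: the id-keyed block
        # from tender_kepler. Both should agree on the osd_id <-> osd_fsid
        # mapping, and every OSD should belong to the same cluster fsid.
        by_id = {str(rec["osd_id"]): uuid for uuid, rec in raw_list.items()}
        for osd_id, lvs in lvm_list.items():
            assert by_id[osd_id] == lvs[0]["tags"]["ceph.osd_fsid"], osd_id
        fsids = {rec["ceph_fsid"] for rec in raw_list.values()}
        assert fsids == {"36901f64-240e-5c29-a2e2-29b56f2c329c"}, fsids

For the two payloads in this log the check passes: osd.0/835781ef... on ceph_vg0-ceph_lv0, osd.1/a345f9b0... on ceph_vg1-ceph_lv1, osd.2/8f697525... on ceph_vg2-ceph_lv2.
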
Nov 26 02:07:51 compute-0 systemd[1]: libpod-45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22.scope: Deactivated successfully.
Nov 26 02:07:51 compute-0 systemd[1]: libpod-45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22.scope: Consumed 1.209s CPU time.
Nov 26 02:07:52 compute-0 podman[438088]: 2025-11-26 02:07:52.056003162 +0000 UTC m=+0.051449723 container died 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bfa798c773e69440160f10d0740b2c08e6ded2233d4c71f110fcef4f0f1b435-merged.mount: Deactivated successfully.
Nov 26 02:07:52 compute-0 podman[438092]: 2025-11-26 02:07:52.13189811 +0000 UTC m=+0.113581595 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 02:07:52 compute-0 podman[438088]: 2025-11-26 02:07:52.142302332 +0000 UTC m=+0.137748833 container remove 45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:07:52 compute-0 podman[438095]: 2025-11-26 02:07:52.148989039 +0000 UTC m=+0.116520467 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:07:52 compute-0 systemd[1]: libpod-conmon-45e1e2ba832a176c1f5093382614e5fb0cabe0834fd553a7e83dcef3ac179e22.scope: Deactivated successfully.
Nov 26 02:07:52 compute-0 podman[438089]: 2025-11-26 02:07:52.155375568 +0000 UTC m=+0.132849065 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 26 02:07:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:07:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:07:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b004a6a1-ab2c-4986-ab95-86c854a24ec5 does not exist
Nov 26 02:07:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a36dc34f-1152-4b7c-a0f1-9a0e82e04378 does not exist
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.353 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.355 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.356 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.356 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.356 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.358 350391 INFO nova.compute.manager [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Terminating instance#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.359 350391 DEBUG nova.compute.manager [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:07:52 compute-0 kernel: tapa47ff2b9-72 (unregistering): left promiscuous mode
Nov 26 02:07:52 compute-0 NetworkManager[48886]: <info>  [1764122872.5270] device (tapa47ff2b9-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.546 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 ovn_controller[89102]: 2025-11-26T02:07:52Z|00061|binding|INFO|Releasing lport a47ff2b9-72e9-48d0-9756-5fe939cf4b29 from this chassis (sb_readonly=0)
Nov 26 02:07:52 compute-0 ovn_controller[89102]: 2025-11-26T02:07:52Z|00062|binding|INFO|Setting lport a47ff2b9-72e9-48d0-9756-5fe939cf4b29 down in Southbound
Nov 26 02:07:52 compute-0 ovn_controller[89102]: 2025-11-26T02:07:52Z|00063|binding|INFO|Removing iface tapa47ff2b9-72 ovn-installed in OVS
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.555 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:52.559 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:66:48 192.168.0.29'], port_security=['fa:16:3e:0f:66:48 192.168.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.29/24', 'neutron:device_id': 'b1c088bc-7a6b-4580-93ff-685731747189', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97f5f89-70be-4349-beb5-5f8e6065072e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d902f6105ab4c81a51a4751fa89a83e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3202a1a-8d71-42b1-ae70-18469fa18607', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5f5986b-4ad4-4edf-b238-68c26c7002dd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=a47ff2b9-72e9-48d0-9756-5fe939cf4b29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:07:52 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:52.560 286844 INFO neutron.agent.ovn.metadata.agent [-] Port a47ff2b9-72e9-48d0-9756-5fe939cf4b29 in datapath c97f5f89-70be-4349-beb5-5f8e6065072e unbound from our chassis#033[00m
Nov 26 02:07:52 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:52.561 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97f5f89-70be-4349-beb5-5f8e6065072e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:07:52 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:52.562 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[110cf35b-8a44-417e-bde9-02482257bbb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:52 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:52.563 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e namespace which is not needed anymore#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.595 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 26 02:07:52 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 29.497s CPU time.
Nov 26 02:07:52 compute-0 systemd-machined[138512]: Machine qemu-1-instance-00000001 terminated.
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.740 350391 DEBUG nova.compute.manager [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-unplugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.743 350391 DEBUG oslo_concurrency.lockutils [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.745 350391 DEBUG oslo_concurrency.lockutils [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.746 350391 DEBUG oslo_concurrency.lockutils [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.746 350391 DEBUG nova.compute.manager [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] No waiting events found dispatching network-vif-unplugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.749 350391 DEBUG nova.compute.manager [req-3a6efa97-5332-45a3-ab59-351f1b9596ab req-f26ee80a-1cec-48bf-a2a5-352810f207f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-unplugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.796 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [NOTICE]   (413610) : haproxy version is 2.8.14-c23fe91
Nov 26 02:07:52 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [NOTICE]   (413610) : path to executable is /usr/sbin/haproxy
Nov 26 02:07:52 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [WARNING]  (413610) : Exiting Master process...
Nov 26 02:07:52 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [ALERT]    (413610) : Current worker (413612) exited with code 143 (Terminated)
Nov 26 02:07:52 compute-0 neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e[413606]: [WARNING]  (413610) : All workers exited. Exiting... (0)
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.811 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 systemd[1]: libpod-ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb.scope: Deactivated successfully.
Nov 26 02:07:52 compute-0 podman[438234]: 2025-11-26 02:07:52.821265566 +0000 UTC m=+0.089449069 container died ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.824 350391 INFO nova.virt.libvirt.driver [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Instance destroyed successfully.#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.825 350391 DEBUG nova.objects.instance [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lazy-loading 'resources' on Instance uuid b1c088bc-7a6b-4580-93ff-685731747189 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.840 350391 DEBUG nova.virt.libvirt.vif [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T01:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T01:50:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d902f6105ab4c81a51a4751fa89a83e',ramdisk_id='',reservation_id='r-sw0o23i9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='48e08d00-37a3-4465-a949-ff0b8afe4def',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T01:50:10Z,user_data=None,user_id='b130e7a8bed3424f9f5ff63b35cd2b28',uuid=b1c088bc-7a6b-4580-93ff-685731747189,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.841 350391 DEBUG nova.network.os_vif_util [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converting VIF {"id": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "address": "fa:16:3e:0f:66:48", "network": {"id": "c97f5f89-70be-4349-beb5-5f8e6065072e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d902f6105ab4c81a51a4751fa89a83e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa47ff2b9-72", "ovs_interfaceid": "a47ff2b9-72e9-48d0-9756-5fe939cf4b29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.842 350391 DEBUG nova.network.os_vif_util [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.843 350391 DEBUG os_vif [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.844 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.846 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa47ff2b9-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.849 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.851 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:07:52 compute-0 nova_compute[350387]: 2025-11-26 02:07:52.855 350391 INFO os_vif [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:66:48,bridge_name='br-int',has_traffic_filtering=True,id=a47ff2b9-72e9-48d0-9756-5fe939cf4b29,network=Network(c97f5f89-70be-4349-beb5-5f8e6065072e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa47ff2b9-72')#033[00m
Nov 26 02:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb-userdata-shm.mount: Deactivated successfully.
Nov 26 02:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-531fbc26ef1368fc38f47483b7b95402bea388fed13a021a2986e31021e8082c-merged.mount: Deactivated successfully.
Nov 26 02:07:52 compute-0 podman[438234]: 2025-11-26 02:07:52.89846679 +0000 UTC m=+0.166650263 container cleanup ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:07:52 compute-0 systemd[1]: libpod-conmon-ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb.scope: Deactivated successfully.
Nov 26 02:07:53 compute-0 podman[438286]: 2025-11-26 02:07:53.011629233 +0000 UTC m=+0.074358346 container remove ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.020 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd4e570-314e-4e28-8dbf-044532443f6d]: (4, ('Wed Nov 26 02:07:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e (ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb)\nee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb\nWed Nov 26 02:07:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e (ee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb)\nee275691fbfe4f32cb1c3d7f656e22c8d3c7f237f3c1d6d74f8461fa56bad7bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.023 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bf9cca-e3bd-4473-b62a-0b9f0c1d6e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.024 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97f5f89-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:07:53 compute-0 nova_compute[350387]: 2025-11-26 02:07:53.028 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:53 compute-0 kernel: tapc97f5f89-70: left promiscuous mode
Nov 26 02:07:53 compute-0 nova_compute[350387]: 2025-11-26 02:07:53.054 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.058 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[34bbcf0b-e7d1-40fe-b84f-63afd6ada950]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.074 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f6863eeb-ee9e-4f9f-953d-98f09a4031d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.076 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b2931c26-22e8-48a1-8e0d-324772f8d438]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.105 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[aeedd240-46b7-4661-9a31-13af125938b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544468, 'reachable_time': 24351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438305, 'error': None, 'target': 'ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.121 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c97f5f89-70be-4349-beb5-5f8e6065072e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:07:53 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:07:53.122 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[7327f2f0-8cad-42ae-9abb-f8d520411ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:07:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dc97f5f89\x2d70be\x2d4349\x2dbeb5\x2d5f8e6065072e.mount: Deactivated successfully.
Nov 26 02:07:53 compute-0 nova_compute[350387]: 2025-11-26 02:07:53.256 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.094 350391 INFO nova.virt.libvirt.driver [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Deleting instance files /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189_del#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.095 350391 INFO nova.virt.libvirt.driver [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Deletion of /var/lib/nova/instances/b1c088bc-7a6b-4580-93ff-685731747189_del complete#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.154 350391 INFO nova.compute.manager [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Took 1.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.155 350391 DEBUG oslo.service.loopingcall [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.156 350391 DEBUG nova.compute.manager [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.157 350391 DEBUG nova.network.neutron [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.325 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.349 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.864 350391 DEBUG nova.compute.manager [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.864 350391 DEBUG oslo_concurrency.lockutils [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "b1c088bc-7a6b-4580-93ff-685731747189-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.865 350391 DEBUG oslo_concurrency.lockutils [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.866 350391 DEBUG oslo_concurrency.lockutils [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.866 350391 DEBUG nova.compute.manager [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] No waiting events found dispatching network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:07:54 compute-0 nova_compute[350387]: 2025-11-26 02:07:54.867 350391 WARNING nova.compute.manager [req-bb389860-c631-4fb6-b4c6-6ca17b4c73f1 req-6cb011e5-ef5a-4b2b-a317-7238486edfa1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received unexpected event network-vif-plugged-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 for instance with vm_state active and task_state deleting.#033[00m
Nov 26 02:07:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:07:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 47 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 27 op/s
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.690 350391 DEBUG nova.network.neutron [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.711 350391 INFO nova.compute.manager [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Took 1.55 seconds to deallocate network for instance.#033[00m
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.763 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.764 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.769 350391 DEBUG nova.compute.manager [req-14387261-f35e-4fba-ad80-4aab78a15fe8 req-d5c74417-ef7d-49e2-b3de-b78adb0a3a44 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Received event network-vif-deleted-a47ff2b9-72e9-48d0-9756-5fe939cf4b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:07:55 compute-0 nova_compute[350387]: 2025-11-26 02:07:55.831 350391 DEBUG oslo_concurrency.processutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:07:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:07:56 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580575668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.323 350391 DEBUG oslo_concurrency.processutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.335 350391 DEBUG nova.compute.provider_tree [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.348 350391 DEBUG nova.scheduler.client.report [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.365 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.399 350391 INFO nova.scheduler.client.report [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Deleted allocations for instance b1c088bc-7a6b-4580-93ff-685731747189#033[00m
Nov 26 02:07:56 compute-0 nova_compute[350387]: 2025-11-26 02:07:56.482 350391 DEBUG oslo_concurrency.lockutils [None req-f7994d12-a7cf-4647-a294-a54f3d50a5fb b130e7a8bed3424f9f5ff63b35cd2b28 4d902f6105ab4c81a51a4751fa89a83e - - default default] Lock "b1c088bc-7a6b-4580-93ff-685731747189" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:07:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:07:57 compute-0 podman[438330]: 2025-11-26 02:07:57.598354399 +0000 UTC m=+0.143834644 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 02:07:57 compute-0 podman[438331]: 2025-11-26 02:07:57.659948905 +0000 UTC m=+0.202877778 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 26 02:07:57 compute-0 nova_compute[350387]: 2025-11-26 02:07:57.850 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:58 compute-0 nova_compute[350387]: 2025-11-26 02:07:58.259 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:07:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:07:59 compute-0 podman[438376]: 2025-11-26 02:07:59.582949335 +0000 UTC m=+0.132539447 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Nov 26 02:07:59 compute-0 podman[158021]: time="2025-11-26T02:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:07:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:07:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8176 "" "Go-http-client/1.1"
Nov 26 02:08:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:08:01 compute-0 openstack_network_exporter[367323]: ERROR   02:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:08:01 compute-0 openstack_network_exporter[367323]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:08:01 compute-0 openstack_network_exporter[367323]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:08:01 compute-0 openstack_network_exporter[367323]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:08:01 compute-0 openstack_network_exporter[367323]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:08:01 compute-0 podman[438398]: 2025-11-26 02:08:01.587667466 +0000 UTC m=+0.120622142 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 26 02:08:02 compute-0 nova_compute[350387]: 2025-11-26 02:08:02.853 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:03 compute-0 nova_compute[350387]: 2025-11-26 02:08:03.261 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:08:04 compute-0 podman[438418]: 2025-11-26 02:08:04.57669857 +0000 UTC m=+0.117617178 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:08:04 compute-0 podman[438417]: 2025-11-26 02:08:04.585880838 +0000 UTC m=+0.129469601 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Nov 26 02:08:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 26 02:08:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 767 B/s wr, 12 op/s
Nov 26 02:08:07 compute-0 nova_compute[350387]: 2025-11-26 02:08:07.820 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764122872.81795, b1c088bc-7a6b-4580-93ff-685731747189 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:08:07 compute-0 nova_compute[350387]: 2025-11-26 02:08:07.820 350391 INFO nova.compute.manager [-] [instance: b1c088bc-7a6b-4580-93ff-685731747189] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:08:07 compute-0 nova_compute[350387]: 2025-11-26 02:08:07.858 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:07 compute-0 nova_compute[350387]: 2025-11-26 02:08:07.870 350391 DEBUG nova.compute.manager [None req-d0927c91-0d01-4a9c-80c0-8e3bcd649a5b - - - - - -] [instance: b1c088bc-7a6b-4580-93ff-685731747189] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:08:08 compute-0 nova_compute[350387]: 2025-11-26 02:08:08.264 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:12 compute-0 nova_compute[350387]: 2025-11-26 02:08:12.862 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:13 compute-0 nova_compute[350387]: 2025-11-26 02:08:13.269 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:17 compute-0 nova_compute[350387]: 2025-11-26 02:08:17.866 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:18 compute-0 nova_compute[350387]: 2025-11-26 02:08:18.272 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:22 compute-0 podman[438463]: 2025-11-26 02:08:22.586257607 +0000 UTC m=+0.122257428 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:08:22 compute-0 podman[438462]: 2025-11-26 02:08:22.593065988 +0000 UTC m=+0.132275579 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:08:22 compute-0 podman[438461]: 2025-11-26 02:08:22.616877116 +0000 UTC m=+0.173291159 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:08:22 compute-0 nova_compute[350387]: 2025-11-26 02:08:22.870 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:23 compute-0 nova_compute[350387]: 2025-11-26 02:08:23.277 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:08:24.991 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:08:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:08:24.992 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:08:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:08:24.992 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:08:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:25 compute-0 ovn_controller[89102]: 2025-11-26T02:08:25Z|00064|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 26 02:08:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:08:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1380980319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:08:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:08:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1380980319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:08:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:27 compute-0 nova_compute[350387]: 2025-11-26 02:08:27.874 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:28 compute-0 nova_compute[350387]: 2025-11-26 02:08:28.279 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:28 compute-0 podman[438520]: 2025-11-26 02:08:28.579975107 +0000 UTC m=+0.134657566 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 02:08:28 compute-0 podman[438521]: 2025-11-26 02:08:28.643400295 +0000 UTC m=+0.184726300 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.352 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.353 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.353 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.353 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.354 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:08:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:29 compute-0 podman[158021]: time="2025-11-26T02:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:08:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:08:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8173 "" "Go-http-client/1.1"
Nov 26 02:08:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:08:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236979481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:08:29 compute-0 nova_compute[350387]: 2025-11-26 02:08:29.910 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:08:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:30 compute-0 podman[438587]: 2025-11-26 02:08:30.143467829 +0000 UTC m=+0.155074419 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.467 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.468 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4180MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.469 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.469 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.585 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.586 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:08:30 compute-0 nova_compute[350387]: 2025-11-26 02:08:30.613 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:08:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:08:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185463500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:08:31 compute-0 nova_compute[350387]: 2025-11-26 02:08:31.126 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:08:31 compute-0 nova_compute[350387]: 2025-11-26 02:08:31.138 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:08:31 compute-0 nova_compute[350387]: 2025-11-26 02:08:31.175 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:08:31 compute-0 nova_compute[350387]: 2025-11-26 02:08:31.200 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:08:31 compute-0 nova_compute[350387]: 2025-11-26 02:08:31.201 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:08:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:31 compute-0 openstack_network_exporter[367323]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:08:31 compute-0 openstack_network_exporter[367323]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:08:31 compute-0 openstack_network_exporter[367323]: ERROR   02:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:08:31 compute-0 openstack_network_exporter[367323]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:08:31 compute-0 openstack_network_exporter[367323]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:08:32 compute-0 podman[438629]: 2025-11-26 02:08:32.575773846 +0000 UTC m=+0.121829376 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:08:32 compute-0 nova_compute[350387]: 2025-11-26 02:08:32.878 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:33 compute-0 nova_compute[350387]: 2025-11-26 02:08:33.282 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:34 compute-0 nova_compute[350387]: 2025-11-26 02:08:34.177 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:34 compute-0 nova_compute[350387]: 2025-11-26 02:08:34.177 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:34 compute-0 nova_compute[350387]: 2025-11-26 02:08:34.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:34 compute-0 nova_compute[350387]: 2025-11-26 02:08:34.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:35 compute-0 podman[438647]: 2025-11-26 02:08:35.569536554 +0000 UTC m=+0.116909519 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible)
Nov 26 02:08:35 compute-0 podman[438648]: 2025-11-26 02:08:35.598245339 +0000 UTC m=+0.139831811 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:08:37 compute-0 nova_compute[350387]: 2025-11-26 02:08:37.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:37 compute-0 nova_compute[350387]: 2025-11-26 02:08:37.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:08:37 compute-0 nova_compute[350387]: 2025-11-26 02:08:37.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:08:37 compute-0 nova_compute[350387]: 2025-11-26 02:08:37.312 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 02:08:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:37 compute-0 nova_compute[350387]: 2025-11-26 02:08:37.883 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:38 compute-0 nova_compute[350387]: 2025-11-26 02:08:38.286 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:38 compute-0 nova_compute[350387]: 2025-11-26 02:08:38.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:38 compute-0 nova_compute[350387]: 2025-11-26 02:08:38.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:08:41
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', '.mgr']
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:08:41 compute-0 nova_compute[350387]: 2025-11-26 02:08:41.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:08:41 compute-0 nova_compute[350387]: 2025-11-26 02:08:41.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:08:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.872 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.873 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.873 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.877 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.881 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.882 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.882 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.882 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.882 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.883 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.884 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.884 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.884 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.884 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.884 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.885 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.886 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.886 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.886 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.886 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.887 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 nova_compute[350387]: 2025-11-26 02:08:42.886 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.887 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.887 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.887 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.892 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.892 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.892 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.893 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.894 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.895 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.896 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.896 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:08:42.896 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:08:43 compute-0 nova_compute[350387]: 2025-11-26 02:08:43.289 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:47 compute-0 nova_compute[350387]: 2025-11-26 02:08:47.890 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:48 compute-0 nova_compute[350387]: 2025-11-26 02:08:48.293 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:08:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:08:52 compute-0 podman[438719]: 2025-11-26 02:08:52.773190221 +0000 UTC m=+0.108833402 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:08:52 compute-0 podman[438718]: 2025-11-26 02:08:52.773610183 +0000 UTC m=+0.113295257 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 02:08:52 compute-0 podman[438720]: 2025-11-26 02:08:52.807272617 +0000 UTC m=+0.135090649 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 02:08:52 compute-0 nova_compute[350387]: 2025-11-26 02:08:52.892 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:53 compute-0 nova_compute[350387]: 2025-11-26 02:08:53.295 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:08:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:08:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:08:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c509eaf-92f2-4b83-97a4-738980db05ae does not exist
Nov 26 02:08:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev aaacd1a8-7e05-45ad-894e-b7990ee8bb08 does not exist
Nov 26 02:08:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 119cd76b-f0ae-484d-93d6-95e7d0269ec9 does not exist
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:08:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:08:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:08:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:08:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:08:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:08:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:08:55 compute-0 podman[439145]: 2025-11-26 02:08:55.874049662 +0000 UTC m=+0.074009446 container create ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:08:55 compute-0 podman[439145]: 2025-11-26 02:08:55.842396224 +0000 UTC m=+0.042356018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:08:55 compute-0 systemd[1]: Started libpod-conmon-ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408.scope.
Nov 26 02:08:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:08:56 compute-0 podman[439145]: 2025-11-26 02:08:56.021580538 +0000 UTC m=+0.221540362 container init ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 26 02:08:56 compute-0 podman[439145]: 2025-11-26 02:08:56.036984289 +0000 UTC m=+0.236944073 container start ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:08:56 compute-0 podman[439145]: 2025-11-26 02:08:56.043251145 +0000 UTC m=+0.243210969 container attach ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 02:08:56 compute-0 hopeful_johnson[439162]: 167 167
Nov 26 02:08:56 compute-0 systemd[1]: libpod-ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408.scope: Deactivated successfully.
Nov 26 02:08:56 compute-0 podman[439145]: 2025-11-26 02:08:56.04878167 +0000 UTC m=+0.248741454 container died ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c44be0f75589db6edfe3900e65aae8382ee0771ba6a8f41f8df6d42e1ec157d-merged.mount: Deactivated successfully.
Nov 26 02:08:56 compute-0 podman[439145]: 2025-11-26 02:08:56.129157483 +0000 UTC m=+0.329117267 container remove ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:08:56 compute-0 systemd[1]: libpod-conmon-ffe08457ac775151ab1f4cf7c720bfc661c023c34d83b7407d794b2f7c9d5408.scope: Deactivated successfully.
Nov 26 02:08:56 compute-0 podman[439185]: 2025-11-26 02:08:56.415208552 +0000 UTC m=+0.090094666 container create 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 26 02:08:56 compute-0 podman[439185]: 2025-11-26 02:08:56.381369814 +0000 UTC m=+0.056255978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:08:56 compute-0 systemd[1]: Started libpod-conmon-3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb.scope.
Nov 26 02:08:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:56 compute-0 podman[439185]: 2025-11-26 02:08:56.622962556 +0000 UTC m=+0.297848720 container init 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 02:08:56 compute-0 podman[439185]: 2025-11-26 02:08:56.650301322 +0000 UTC m=+0.325187426 container start 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:08:56 compute-0 podman[439185]: 2025-11-26 02:08:56.656957019 +0000 UTC m=+0.331843143 container attach 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:08:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:57 compute-0 nova_compute[350387]: 2025-11-26 02:08:57.895 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:57 compute-0 wizardly_goldwasser[439201]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:08:57 compute-0 wizardly_goldwasser[439201]: --> relative data size: 1.0
Nov 26 02:08:57 compute-0 wizardly_goldwasser[439201]: --> All data devices are unavailable
Nov 26 02:08:57 compute-0 systemd[1]: libpod-3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb.scope: Deactivated successfully.
Nov 26 02:08:57 compute-0 podman[439185]: 2025-11-26 02:08:57.957748936 +0000 UTC m=+1.632635060 container died 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:08:57 compute-0 systemd[1]: libpod-3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb.scope: Consumed 1.256s CPU time.
Nov 26 02:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5055c79b87b594a8fe9cd0cd6caaa02e7262da76db3e2bbfd33ea7e8788b4cc-merged.mount: Deactivated successfully.
Nov 26 02:08:58 compute-0 podman[439185]: 2025-11-26 02:08:58.059228991 +0000 UTC m=+1.734115095 container remove 3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:08:58 compute-0 systemd[1]: libpod-conmon-3c9185be8f2faa0fa2ebd200535e77248821f8ea0c3c0be28f83f6e3c66f9deb.scope: Deactivated successfully.
Nov 26 02:08:58 compute-0 nova_compute[350387]: 2025-11-26 02:08:58.298 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:08:58 compute-0 podman[439339]: 2025-11-26 02:08:58.812630432 +0000 UTC m=+0.136208950 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:08:59 compute-0 podman[439367]: 2025-11-26 02:08:59.026459226 +0000 UTC m=+0.185934323 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
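[annotation] The config_data field embedded in the two health_status events above is rendered as a Python dict literal (single-quoted strings, and bare True in the ovn_controller entry), not JSON, so json.loads will reject it. A minimal sketch for parsing a value copied out of such a line, assuming it has been extracted intact; the string below is a trimmed, hypothetical stand-in for the real (much longer) value:

    import ast

    # Trimmed stand-in for a config_data value copied out of a
    # health_status event; the real value is much longer.
    config_data = ("{'image': 'quay.io/podified-antelope-centos9/"
                   "openstack-ceilometer-ipmi:current-podified', "
                   "'privileged': 'true', 'net': 'host'}")

    # ast.literal_eval handles Python literals (single quotes, True/False)
    # that json.loads would reject.
    cfg = ast.literal_eval(config_data)
    print(cfg["image"], cfg["net"])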
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.211169405 +0000 UTC m=+0.084915322 container create 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.177953114 +0000 UTC m=+0.051699071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:08:59 compute-0 systemd[1]: Started libpod-conmon-5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91.scope.
Nov 26 02:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.339733029 +0000 UTC m=+0.213478986 container init 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.357755284 +0000 UTC m=+0.231501201 container start 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.363997889 +0000 UTC m=+0.237743806 container attach 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:08:59 compute-0 admiring_keller[439438]: 167 167
Nov 26 02:08:59 compute-0 systemd[1]: libpod-5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91.scope: Deactivated successfully.
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.371205451 +0000 UTC m=+0.244951358 container died 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:08:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e945bf446e901f7ff44d958764b7c0851e61e042341716c41e5116a34f892b1-merged.mount: Deactivated successfully.
Nov 26 02:08:59 compute-0 podman[439422]: 2025-11-26 02:08:59.446590645 +0000 UTC m=+0.320336532 container remove 5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:08:59 compute-0 systemd[1]: libpod-conmon-5a58c8477994bc90edb4ee6c1070aaedb878f250b70399530300d217d123cf91.scope: Deactivated successfully.
Nov 26 02:08:59 compute-0 podman[439461]: 2025-11-26 02:08:59.739529457 +0000 UTC m=+0.090021945 container create 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:08:59 compute-0 podman[158021]: time="2025-11-26T02:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:08:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44146 "" "Go-http-client/1.1"
Nov 26 02:08:59 compute-0 podman[439461]: 2025-11-26 02:08:59.705649107 +0000 UTC m=+0.056141635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:08:59 compute-0 systemd[1]: Started libpod-conmon-59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183.scope.
Nov 26 02:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7452e612ee88dc06f78ec9a9b89123252a1ff919b7502481a86e058ddb460cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7452e612ee88dc06f78ec9a9b89123252a1ff919b7502481a86e058ddb460cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7452e612ee88dc06f78ec9a9b89123252a1ff919b7502481a86e058ddb460cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7452e612ee88dc06f78ec9a9b89123252a1ff919b7502481a86e058ddb460cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:08:59 compute-0 podman[439461]: 2025-11-26 02:08:59.939555134 +0000 UTC m=+0.290047652 container init 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 02:08:59 compute-0 podman[439461]: 2025-11-26 02:08:59.95757991 +0000 UTC m=+0.308072398 container start 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:08:59 compute-0 podman[439461]: 2025-11-26 02:08:59.964592136 +0000 UTC m=+0.315084634 container attach 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:08:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8577 "" "Go-http-client/1.1"
Nov 26 02:09:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:00 compute-0 podman[439482]: 2025-11-26 02:09:00.62252815 +0000 UTC m=+0.170409648 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Nov 26 02:09:00 compute-0 stupefied_bell[439477]: {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    "0": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "devices": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "/dev/loop3"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            ],
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_name": "ceph_lv0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_size": "21470642176",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "name": "ceph_lv0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "tags": {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_name": "ceph",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.crush_device_class": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.encrypted": "0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_id": "0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.vdo": "0"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            },
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "vg_name": "ceph_vg0"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        }
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    ],
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    "1": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "devices": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "/dev/loop4"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            ],
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_name": "ceph_lv1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_size": "21470642176",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "name": "ceph_lv1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "tags": {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_name": "ceph",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.crush_device_class": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.encrypted": "0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_id": "1",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.vdo": "0"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            },
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "vg_name": "ceph_vg1"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        }
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    ],
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    "2": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "devices": [
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "/dev/loop5"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            ],
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_name": "ceph_lv2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_size": "21470642176",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "name": "ceph_lv2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "tags": {
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.cluster_name": "ceph",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.crush_device_class": "",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.encrypted": "0",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osd_id": "2",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:                "ceph.vdo": "0"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            },
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "type": "block",
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:            "vg_name": "ceph_vg2"
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:        }
Nov 26 02:09:00 compute-0 stupefied_bell[439477]:    ]
Nov 26 02:09:00 compute-0 stupefied_bell[439477]: }
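[annotation] The JSON emitted by stupefied_bell above is the device inventory cephadm gathers from a short-lived ceph-volume container; from the shape of the output it appears to match `ceph-volume lvm list --format json` (an assumption based on the fields shown, not stated in the log). A minimal sketch that condenses such a report into one line per OSD, assuming it has been captured to a hypothetical local file:

    import json

    # Hypothetical capture of the per-OSD-id report shown above.
    with open("ceph-volume-lvm-list.json") as f:
        report = json.load(f)

    # Top-level keys are OSD ids; each value is a list of LV entries.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")

Against the report above this prints one line each for osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5.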
Nov 26 02:09:00 compute-0 systemd[1]: libpod-59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183.scope: Deactivated successfully.
Nov 26 02:09:00 compute-0 podman[439461]: 2025-11-26 02:09:00.828588517 +0000 UTC m=+1.179080985 container died 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7452e612ee88dc06f78ec9a9b89123252a1ff919b7502481a86e058ddb460cf-merged.mount: Deactivated successfully.
Nov 26 02:09:00 compute-0 podman[439461]: 2025-11-26 02:09:00.931363668 +0000 UTC m=+1.281856136 container remove 59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:09:00 compute-0 systemd[1]: libpod-conmon-59cb91c3935bc1eb355295b379643708dc6300e4ebd99b5a7c4ef803580af183.scope: Deactivated successfully.
Nov 26 02:09:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:01 compute-0 openstack_network_exporter[367323]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:09:01 compute-0 openstack_network_exporter[367323]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:09:01 compute-0 openstack_network_exporter[367323]: ERROR   02:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:09:01 compute-0 openstack_network_exporter[367323]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Nov 26 02:09:01 compute-0 openstack_network_exporter[367323]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.128116428 +0000 UTC m=+0.096171917 container create a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.094345402 +0000 UTC m=+0.062400951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:09:02 compute-0 systemd[1]: Started libpod-conmon-a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540.scope.
Nov 26 02:09:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.298781313 +0000 UTC m=+0.266836862 container init a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.316895741 +0000 UTC m=+0.284951240 container start a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.325624265 +0000 UTC m=+0.293679814 container attach a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 02:09:02 compute-0 serene_albattani[439674]: 167 167
Nov 26 02:09:02 compute-0 systemd[1]: libpod-a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540.scope: Deactivated successfully.
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.330264185 +0000 UTC m=+0.298319674 container died a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7a90be18e69eaee2b6b7fc0a8527c07c4e4470d09f615ffc1ffd49fddbbb5a1-merged.mount: Deactivated successfully.
Nov 26 02:09:02 compute-0 podman[439658]: 2025-11-26 02:09:02.400184976 +0000 UTC m=+0.368240445 container remove a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:09:02 compute-0 systemd[1]: libpod-conmon-a9cdc0f71c959fe0768040465cd354b22eb9b8233cc8c56f37f70443c0cea540.scope: Deactivated successfully.
Nov 26 02:09:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:02.464 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:09:02 compute-0 nova_compute[350387]: 2025-11-26 02:09:02.466 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:02 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:02.466 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:09:02 compute-0 podman[439697]: 2025-11-26 02:09:02.659787983 +0000 UTC m=+0.099673045 container create c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:09:02 compute-0 podman[439697]: 2025-11-26 02:09:02.626107949 +0000 UTC m=+0.065993091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:09:02 compute-0 systemd[1]: Started libpod-conmon-c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081.scope.
Nov 26 02:09:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fb0e605788fab2929ae5cb659607b343989a3f023df8d0dd038eba3462f41b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fb0e605788fab2929ae5cb659607b343989a3f023df8d0dd038eba3462f41b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fb0e605788fab2929ae5cb659607b343989a3f023df8d0dd038eba3462f41b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55fb0e605788fab2929ae5cb659607b343989a3f023df8d0dd038eba3462f41b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:09:02 compute-0 podman[439697]: 2025-11-26 02:09:02.830726696 +0000 UTC m=+0.270611818 container init c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 02:09:02 compute-0 podman[439697]: 2025-11-26 02:09:02.854324497 +0000 UTC m=+0.294209559 container start c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:09:02 compute-0 podman[439697]: 2025-11-26 02:09:02.86120906 +0000 UTC m=+0.301094132 container attach c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:09:02 compute-0 podman[439710]: 2025-11-26 02:09:02.879235215 +0000 UTC m=+0.144615675 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 02:09:02 compute-0 nova_compute[350387]: 2025-11-26 02:09:02.901 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:03 compute-0 nova_compute[350387]: 2025-11-26 02:09:03.303 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]: {
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_id": 0,
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "type": "bluestore"
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    },
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_id": 2,
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "type": "bluestore"
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    },
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_id": 1,
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:        "type": "bluestore"
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]:    }
Nov 26 02:09:04 compute-0 trusting_varahamihira[439719]: }
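[annotation] The report from trusting_varahamihira is keyed by OSD uuid and resembles `ceph-volume raw list` output (again an assumption from the field names, not confirmed by the log). A small sketch cross-checking it against the earlier per-OSD-id report, with both assumed captured to hypothetical local files:

    import json

    # Hypothetical captures of the two reports shown in this log.
    with open("ceph-volume-lvm-list.json") as f:
        lvm = json.load(f)      # keyed by OSD id
    with open("ceph-volume-raw-list.json") as f:
        raw = json.load(f)      # keyed by OSD uuid

    # Index the uuid-keyed report by integer OSD id, then verify that each
    # OSD id maps to the same fsid in both reports.
    by_id = {entry["osd_id"]: entry for entry in raw.values()}
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = by_id.get(int(osd_id))
        status = "consistent" if entry and entry["osd_uuid"] == fsid else "MISMATCH"
        device = entry["device"] if entry else "?"
        print(f"osd.{osd_id} ({fsid}) -> {device}: {status}")

For the three OSDs above, all entries agree (e.g. osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0: consistent), which is what cephadm then records via the config-key set commands logged just below.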
Nov 26 02:09:04 compute-0 systemd[1]: libpod-c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081.scope: Deactivated successfully.
Nov 26 02:09:04 compute-0 systemd[1]: libpod-c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081.scope: Consumed 1.217s CPU time.
Nov 26 02:09:04 compute-0 podman[439697]: 2025-11-26 02:09:04.071133848 +0000 UTC m=+1.511018930 container died c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 02:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-55fb0e605788fab2929ae5cb659607b343989a3f023df8d0dd038eba3462f41b-merged.mount: Deactivated successfully.
Nov 26 02:09:04 compute-0 podman[439697]: 2025-11-26 02:09:04.170251957 +0000 UTC m=+1.610136999 container remove c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:09:04 compute-0 systemd[1]: libpod-conmon-c4e357bc276083b238ca55f8e8e7ab988028a82eaf53e20d02898dc05503c081.scope: Deactivated successfully.
Nov 26 02:09:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:09:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:09:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:09:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:09:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8f800141-1b12-40c8-bfe9-aa741921496c does not exist
Nov 26 02:09:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ac1f44bb-9f39-4243-bf9f-4bb65a557862 does not exist
Nov 26 02:09:04 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:09:04 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:09:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:06 compute-0 podman[439828]: 2025-11-26 02:09:06.583541403 +0000 UTC m=+0.123538955 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:09:06 compute-0 podman[439827]: 2025-11-26 02:09:06.589350685 +0000 UTC m=+0.134467720 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, release=1755695350)
Nov 26 02:09:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:07 compute-0 nova_compute[350387]: 2025-11-26 02:09:07.904 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:08 compute-0 nova_compute[350387]: 2025-11-26 02:09:08.305 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 26 02:09:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 26 02:09:08 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 26 02:09:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 26 02:09:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 26 02:09:10 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 44 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 24 op/s
Nov 26 02:09:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:12.470 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:09:12 compute-0 nova_compute[350387]: 2025-11-26 02:09:12.908 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:13 compute-0 nova_compute[350387]: 2025-11-26 02:09:13.310 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 26 02:09:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 57 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 26 02:09:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.8 MiB/s wr, 44 op/s
Nov 26 02:09:17 compute-0 nova_compute[350387]: 2025-11-26 02:09:17.912 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:18 compute-0 nova_compute[350387]: 2025-11-26 02:09:18.312 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Nov 26 02:09:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.1 MiB/s wr, 35 op/s
Nov 26 02:09:22 compute-0 nova_compute[350387]: 2025-11-26 02:09:22.916 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:23 compute-0 nova_compute[350387]: 2025-11-26 02:09:23.316 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Nov 26 02:09:23 compute-0 podman[439868]: 2025-11-26 02:09:23.587875673 +0000 UTC m=+0.127401822 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:09:23 compute-0 podman[439870]: 2025-11-26 02:09:23.590664351 +0000 UTC m=+0.120874799 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:09:23 compute-0 podman[439869]: 2025-11-26 02:09:23.615307322 +0000 UTC m=+0.149981975 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 02:09:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:24.993 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:09:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:24.993 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:09:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:24.994 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:09:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Nov 26 02:09:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:09:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1887629242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:09:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:09:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1887629242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:09:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 26 02:09:27 compute-0 nova_compute[350387]: 2025-11-26 02:09:27.919 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:28 compute-0 nova_compute[350387]: 2025-11-26 02:09:28.320 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:29 compute-0 podman[439930]: 2025-11-26 02:09:29.595986233 +0000 UTC m=+0.148084432 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:09:29 compute-0 podman[439931]: 2025-11-26 02:09:29.6140738 +0000 UTC m=+0.158614368 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:09:29 compute-0 podman[158021]: time="2025-11-26T02:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:09:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:09:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8178 "" "Go-http-client/1.1"
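The two GET lines above are the podman system service answering libpod REST calls on its unix socket; the podman_exporter container a few entries earlier is configured with CONTAINER_HOST=unix:///run/podman/podman.sock, the same endpoint. A sketch of issuing the same containers/json call from Python (assumes the caller can read the socket; field names follow the libpod list-containers response):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection routed over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c["Names"][0], c["State"])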
Nov 26 02:09:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:31 compute-0 openstack_network_exporter[367323]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:09:31 compute-0 openstack_network_exporter[367323]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:09:31 compute-0 openstack_network_exporter[367323]: ERROR   02:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:09:31 compute-0 openstack_network_exporter[367323]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:09:31 compute-0 openstack_network_exporter[367323]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.573 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.573 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.574 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.574 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:09:31 compute-0 nova_compute[350387]: 2025-11-26 02:09:31.575 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:09:31 compute-0 podman[439974]: 2025-11-26 02:09:31.614558052 +0000 UTC m=+0.160462069 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 02:09:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:09:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524859293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.089 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
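The resource audit shells out to the ceph CLI with the openstack client id to size the RBD-backed storage; each call takes roughly half a second here. A sketch of the same probe, pulling the cluster totals from the JSON (a sketch, assuming the standard `ceph df --format=json` schema with a top-level "stats" object):

    # Assumes /etc/ceph/ceph.conf and the client.openstack keyring are
    # readable by the caller, as they are for nova_compute above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")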
Nov 26 02:09:32 compute-0 ovn_controller[89102]: 2025-11-26T02:09:32Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.687 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.689 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4142MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.689 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.690 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.854 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.855 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.881 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:09:32 compute-0 nova_compute[350387]: 2025-11-26 02:09:32.923 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.323 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:09:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657837735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.364 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.375 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.389 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.391 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:09:33 compute-0 nova_compute[350387]: 2025-11-26 02:09:33.391 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
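The inventory reported to placement two entries up encodes the host's oversubscription policy: placement admits a new allocation only while used + requested <= (total - reserved) * allocation_ratio. Applied to the logged numbers, this host advertises 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    # Effective capacity implied by the inventory in the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2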
Nov 26 02:09:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:33 compute-0 podman[440036]: 2025-11-26 02:09:33.564510737 +0000 UTC m=+0.124113701 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:09:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:35 compute-0 nova_compute[350387]: 2025-11-26 02:09:35.392 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:35 compute-0 nova_compute[350387]: 2025-11-26 02:09:35.393 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:35 compute-0 nova_compute[350387]: 2025-11-26 02:09:35.393 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:35 compute-0 nova_compute[350387]: 2025-11-26 02:09:35.394 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:37 compute-0 podman[440056]: 2025-11-26 02:09:37.559587606 +0000 UTC m=+0.109214953 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:09:37 compute-0 podman[440055]: 2025-11-26 02:09:37.591790589 +0000 UTC m=+0.147181478 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Nov 26 02:09:37 compute-0 nova_compute[350387]: 2025-11-26 02:09:37.927 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:38 compute-0 nova_compute[350387]: 2025-11-26 02:09:38.327 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:39 compute-0 nova_compute[350387]: 2025-11-26 02:09:39.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:39 compute-0 nova_compute[350387]: 2025-11-26 02:09:39.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:09:39 compute-0 nova_compute[350387]: 2025-11-26 02:09:39.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:09:39 compute-0 nova_compute[350387]: 2025-11-26 02:09:39.330 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 02:09:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:40 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:40.108 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:09:40 compute-0 nova_compute[350387]: 2025-11-26 02:09:40.108 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:40 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:40.109 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:09:40 compute-0 nova_compute[350387]: 2025-11-26 02:09:40.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:40 compute-0 nova_compute[350387]: 2025-11-26 02:09:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:09:41
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images']
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:09:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:09:42 compute-0 nova_compute[350387]: 2025-11-26 02:09:42.930 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:43 compute-0 nova_compute[350387]: 2025-11-26 02:09:43.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:43 compute-0 nova_compute[350387]: 2025-11-26 02:09:43.315 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:09:43 compute-0 nova_compute[350387]: 2025-11-26 02:09:43.315 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:09:43 compute-0 nova_compute[350387]: 2025-11-26 02:09:43.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:44 compute-0 nova_compute[350387]: 2025-11-26 02:09:44.525 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:09:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:45 compute-0 nova_compute[350387]: 2025-11-26 02:09:45.613 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:45 compute-0 nova_compute[350387]: 2025-11-26 02:09:45.947 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:45 compute-0 nova_compute[350387]: 2025-11-26 02:09:45.976 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:09:46.112 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
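The DbSetCommand above is the OVN metadata agent acknowledging the southbound config sequence number by updating its Chassis_Private row. A minimal sketch of the same update through ovsdbapp's public API; the endpoint is an assumption (the logged record does not show it), the table, row UUID, and column value are taken from the logged command, and the agent additionally passes if_exists=True:

    # Sketch only, not the agent's code: connect an OVN_Southbound IDL and
    # update Chassis_Private.external_ids the way the logged DbSetCommand does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')  # endpoint assumed
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', '27d03014-5e51-4d89-b5a1-b13242894075',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'})))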
Nov 26 02:09:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:47 compute-0 nova_compute[350387]: 2025-11-26 02:09:47.933 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:48 compute-0 nova_compute[350387]: 2025-11-26 02:09:48.333 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:50 compute-0 nova_compute[350387]: 2025-11-26 02:09:50.684 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
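Each _maybe_adjust pass above applies the same per-pool arithmetic: the pool's share of raw capacity, times its bias, times the cluster PG budget. The factor of 300 implied by the logged numbers (pool 'images': 0.0009191... x 1.0 x 300 ~= 0.2757) is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster; the OSD count is inferred here, not logged. A sketch of the calculation, with the quantization floor (pg_num_min) varying per pool (1 for '.mgr', 16 for the cephfs metadata pool, 32 for the others):

    import math

    # Illustrative reconstruction of the pg_autoscaler lines above; not the
    # ceph-mgr source. num_osds=3 and target_pg_per_osd=100 are assumptions
    # consistent with the factor of 300 in the logged targets.
    def pg_target(usage_ratio, bias, num_osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * num_osds * target_pg_per_osd

    def quantize(target, pg_num_min=32):
        # Round up to a power of two, never below the pool's pg_num_min.
        return max(pg_num_min, 2 ** math.ceil(math.log2(max(target, 1.0))))

    print(pg_target(0.0009191400908380543, 1.0))  # ~0.2757, as logged for 'images'
    print(quantize(0.2757))                       # 32, as logged

In upstream default behavior the autoscaler then leaves pg_num alone unless the quantized value differs from the current one by roughly a factor of three, which is why 'cephfs.cephfs.meta' (quantized to 16, current 32) is reported but not resized.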
Nov 26 02:09:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:51 compute-0 nova_compute[350387]: 2025-11-26 02:09:51.590 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:52 compute-0 nova_compute[350387]: 2025-11-26 02:09:52.936 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:53 compute-0 nova_compute[350387]: 2025-11-26 02:09:53.337 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:53 compute-0 nova_compute[350387]: 2025-11-26 02:09:53.430 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:53 compute-0 nova_compute[350387]: 2025-11-26 02:09:53.759 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:54 compute-0 podman[440103]: 2025-11-26 02:09:54.580023029 +0000 UTC m=+0.110998353 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:09:54 compute-0 podman[440101]: 2025-11-26 02:09:54.583321491 +0000 UTC m=+0.132687331 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:09:54 compute-0 podman[440102]: 2025-11-26 02:09:54.587748315 +0000 UTC m=+0.117957528 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 02:09:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:09:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:57 compute-0 nova_compute[350387]: 2025-11-26 02:09:57.940 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:58 compute-0 nova_compute[350387]: 2025-11-26 02:09:58.341 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:09:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:09:59 compute-0 podman[158021]: time="2025-11-26T02:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:09:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:09:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8171 "" "Go-http-client/1.1"
Nov 26 02:10:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:00 compute-0 podman[440161]: 2025-11-26 02:10:00.603784341 +0000 UTC m=+0.149917274 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 02:10:00 compute-0 podman[440162]: 2025-11-26 02:10:00.648417552 +0000 UTC m=+0.185024258 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 02:10:01 compute-0 openstack_network_exporter[367323]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:10:01 compute-0 openstack_network_exporter[367323]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:10:01 compute-0 openstack_network_exporter[367323]: ERROR   02:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:10:01 compute-0 openstack_network_exporter[367323]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:10:01 compute-0 openstack_network_exporter[367323]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:10:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:02 compute-0 podman[440205]: 2025-11-26 02:10:02.586731899 +0000 UTC m=+0.127264719 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:10:02 compute-0 nova_compute[350387]: 2025-11-26 02:10:02.944 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:03 compute-0 nova_compute[350387]: 2025-11-26 02:10:03.343 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:04 compute-0 podman[440226]: 2025-11-26 02:10:04.568541636 +0000 UTC m=+0.119967754 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 26 02:10:04 compute-0 nova_compute[350387]: 2025-11-26 02:10:04.847 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:04 compute-0 nova_compute[350387]: 2025-11-26 02:10:04.849 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:04 compute-0 nova_compute[350387]: 2025-11-26 02:10:04.883 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.004 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.005 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.018 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.019 350391 INFO nova.compute.claims [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Claim successful on node compute-0.ctlplane.example.com
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.150 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189743802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.667 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
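The resource claim above sizes the Ceph-backed disk inventory by shelling out to ceph df, exactly as logged, and reading the JSON back. A sketch of that call and the fields of interest:

    # Sketch of the subprocess call logged above and of reading its JSON
    # output; key names follow `ceph df --format=json`.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    pools = {p['name']: p['stats'] for p in stats['pools']}
    print(stats['stats']['total_avail_bytes'], pools['vms']['bytes_used'])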
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.680 350391 DEBUG nova.compute.provider_tree [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.719 350391 DEBUG nova.scheduler.client.report [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.748 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
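The Acquiring/acquired/released trio around "compute_resources" above is oslo.concurrency's standard lock logging. A minimal sketch of the pattern (the lock name is reused from the log; the body is illustrative):

    # Sketch of the oslo.concurrency locking that produces the
    # "Acquiring lock ... acquired ... released" lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock('compute_resources'):
        pass  # critical section: the resource tracker's claim bookkeeping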
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.749 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.804 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.806 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.851 350391 INFO nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.873 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8a394456-0bf8-46c9-8eef-931f87ac7a85 does not exist
Nov 26 02:10:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev cbc4edb3-f02e-4558-af00-558d0937eb64 does not exist
Nov 26 02:10:05 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f671f2d3-6e36-4e6d-ad5d-df615e4789dd does not exist
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:10:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:10:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.957 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.960 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:10:05 compute-0 nova_compute[350387]: 2025-11-26 02:10:05.961 350391 INFO nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Creating image(s)
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.013 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.071 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.110 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.119 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.121 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.151 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.151 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.160 350391 DEBUG nova.policy [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aadae2b9a9834185b051c2bc59c6054a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339deb116b764070abc6d50520ee33c8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.167 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.248 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.249 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.261 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.262 350391 INFO nova.compute.claims [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Claim successful on node compute-0.ctlplane.example.com
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.449 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 02:10:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:10:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.814 350391 DEBUG nova.virt.libvirt.imagebackend [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Image locations are: [{'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/4728a8a0-1107-4816-98c6-74482d53f92c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/4728a8a0-1107-4816-98c6-74482d53f92c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
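Given the rbd:// locations above, nova's imagebackend clones the Glance image's snapshot instead of downloading the image. A sketch of that operation with the python rbd bindings, reusing the pool, image, and snapshot names from the logged URL; the destination name follows nova's <uuid>_disk convention seen earlier:

    # Sketch only (not nova's imagebackend code): copy-on-write clone of the
    # Glance snapshot into the 'vms' pool. Assumes the snapshot is protected,
    # which RBD requires before cloning.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    src = cluster.open_ioctx('images')
    dst = cluster.open_ioctx('vms')
    try:
        rbd.RBD().clone(src, '4728a8a0-1107-4816-98c6-74482d53f92c', 'snap',
                        dst, '5c8719f7-1028-4983-aa89-c99a459b6295_disk')
    finally:
        src.close()
        dst.close()
        cluster.shutdown()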
Nov 26 02:10:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:10:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437777537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.948 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:06 compute-0 podman[440609]: 2025-11-26 02:10:06.950444041 +0000 UTC m=+0.097912086 container create 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.960 350391 DEBUG nova.compute.provider_tree [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:10:06 compute-0 nova_compute[350387]: 2025-11-26 02:10:06.980 350391 DEBUG nova.scheduler.client.report [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:06.912443596 +0000 UTC m=+0.059911681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.012 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.013 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:10:07 compute-0 systemd[1]: Started libpod-conmon-3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3.scope.
Nov 26 02:10:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.074 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.075 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:07.085773375 +0000 UTC m=+0.233241420 container init 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.094 350391 INFO nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:07.103565314 +0000 UTC m=+0.251033359 container start 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:10:07 compute-0 romantic_ellis[440627]: 167 167
Nov 26 02:10:07 compute-0 systemd[1]: libpod-3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3.scope: Deactivated successfully.
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:07.110439847 +0000 UTC m=+0.257907932 container attach 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:07.11234371 +0000 UTC m=+0.259811755 container died 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.121 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6230731266ba187aa42a75efee87bcd1d71e494ddcfc684166f67b219b5bc2af-merged.mount: Deactivated successfully.
Nov 26 02:10:07 compute-0 podman[440609]: 2025-11-26 02:10:07.186623033 +0000 UTC m=+0.334091058 container remove 3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:10:07 compute-0 systemd[1]: libpod-conmon-3c4fa9f468e3ccd2c91a61f310ec388f7dcafa41d6d370464165093c398c9ef3.scope: Deactivated successfully.
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.240 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.242 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.243 350391 INFO nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Creating image(s)
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.295 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.355 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.427 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.442 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.443 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Successfully created port: 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 02:10:07 compute-0 podman[440684]: 2025-11-26 02:10:07.461119368 +0000 UTC m=+0.088206804 container create d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:10:07 compute-0 podman[440684]: 2025-11-26 02:10:07.433790342 +0000 UTC m=+0.060877778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:07 compute-0 systemd[1]: Started libpod-conmon-d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3.scope.
Nov 26 02:10:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:07 compute-0 podman[440684]: 2025-11-26 02:10:07.631748791 +0000 UTC m=+0.258836227 container init d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:10:07 compute-0 podman[440684]: 2025-11-26 02:10:07.658750938 +0000 UTC m=+0.285838364 container start d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:10:07 compute-0 podman[440684]: 2025-11-26 02:10:07.665284531 +0000 UTC m=+0.292371937 container attach d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.723 350391 DEBUG nova.policy [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1c3747a1e5af44b0bcc1d0a5f8241343', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ec6a84328cd54e0fad4f07089c4e4e95', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:10:07 compute-0 nova_compute[350387]: 2025-11-26 02:10:07.948 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:08 compute-0 nova_compute[350387]: 2025-11-26 02:10:08.410 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:08 compute-0 podman[440734]: 2025-11-26 02:10:08.574214832 +0000 UTC m=+0.112972399 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:10:08 compute-0 podman[440732]: 2025-11-26 02:10:08.586694751 +0000 UTC m=+0.125877270 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc.)
Nov 26 02:10:08 compute-0 confident_cohen[440720]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:10:08 compute-0 confident_cohen[440720]: --> relative data size: 1.0
Nov 26 02:10:08 compute-0 confident_cohen[440720]: --> All data devices are unavailable
Nov 26 02:10:08 compute-0 systemd[1]: libpod-d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3.scope: Deactivated successfully.
Nov 26 02:10:08 compute-0 systemd[1]: libpod-d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3.scope: Consumed 1.147s CPU time.
Nov 26 02:10:08 compute-0 podman[440789]: 2025-11-26 02:10:08.954473282 +0000 UTC m=+0.062336239 container died d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e919de3d9896a881abbcce278e2348d6143eb72992e4994f7a6cc5e74a03f09-merged.mount: Deactivated successfully.
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.033 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Successfully updated port: 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.051 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.051 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.052 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:10:09 compute-0 podman[440789]: 2025-11-26 02:10:09.067725297 +0000 UTC m=+0.175588164 container remove d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:10:09 compute-0 systemd[1]: libpod-conmon-d499a9604aced86f987edcf882efac3cb83db34086dacd6d9d0cbcfc7b6ab7e3.scope: Deactivated successfully.
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.249 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Successfully created port: 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 02:10:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.889 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:10:09 compute-0 nova_compute[350387]: 2025-11-26 02:10:09.935 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.032 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.part --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.034 350391 DEBUG nova.virt.images [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] 4728a8a0-1107-4816-98c6-74482d53f92c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.036 350391 DEBUG nova.privsep.utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.037 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.part /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.22714814 +0000 UTC m=+0.089214242 container create cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.188704863 +0000 UTC m=+0.050770995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:10 compute-0 systemd[1]: Started libpod-conmon-cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164.scope.
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.327 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.part /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.converted" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.338 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.361057194 +0000 UTC m=+0.223123306 container init cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.376560119 +0000 UTC m=+0.238626211 container start cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.381115727 +0000 UTC m=+0.243181819 container attach cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 02:10:10 compute-0 pensive_bartik[440967]: 167 167
Nov 26 02:10:10 compute-0 systemd[1]: libpod-cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164.scope: Deactivated successfully.
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.388523215 +0000 UTC m=+0.250589337 container died cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.411 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.414 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b243719e3dfe725c90b48e940d74921992da6bbe5f7fc5b4ed03f321d7bd38f5-merged.mount: Deactivated successfully.
Nov 26 02:10:10 compute-0 podman[440951]: 2025-11-26 02:10:10.453199418 +0000 UTC m=+0.315265520 container remove cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 02:10:10 compute-0 systemd[1]: libpod-conmon-cff2c2d28080cfeb7ad011a5684642d55cdd113661f42854433610ecb6eb0164.scope: Deactivated successfully.
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.479 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.497 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 5c8719f7-1028-4983-aa89-c99a459b6295_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.522 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 3.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.524 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.574 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.587 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 270d952c-e221-49ae-ba25-b259f07a2be3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.642 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Successfully updated port: 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.662 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.663 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquired lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.663 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:10:10 compute-0 podman[441046]: 2025-11-26 02:10:10.770650937 +0000 UTC m=+0.111938099 container create 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 02:10:10 compute-0 podman[441046]: 2025-11-26 02:10:10.7197307 +0000 UTC m=+0.061017942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:10 compute-0 systemd[1]: Started libpod-conmon-010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6.scope.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.875696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010876033, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2058, "num_deletes": 251, "total_data_size": 3372452, "memory_usage": 3428704, "flush_reason": "Manual Compaction"}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010891163, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3306296, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34734, "largest_seqno": 36791, "table_properties": {"data_size": 3296949, "index_size": 5905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18879, "raw_average_key_size": 20, "raw_value_size": 3278233, "raw_average_value_size": 3494, "num_data_blocks": 262, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764122788, "oldest_key_time": 1764122788, "file_creation_time": 1764123010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15517 microseconds, and 7422 cpu microseconds.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.891219) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3306296 bytes OK
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.891235) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.895417) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.895432) EVENT_LOG_v1 {"time_micros": 1764123010895427, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.895449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3363843, prev total WAL file size 3363843, number of live WAL files 2.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.897071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3228KB)], [80(6908KB)]
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010897140, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10381007, "oldest_snapshot_seqno": -1}
Nov 26 02:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83c81f2cd835a8b858947c6c8d8b32f1b8eea9d2ec9fae57689870467b0c286/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83c81f2cd835a8b858947c6c8d8b32f1b8eea9d2ec9fae57689870467b0c286/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83c81f2cd835a8b858947c6c8d8b32f1b8eea9d2ec9fae57689870467b0c286/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83c81f2cd835a8b858947c6c8d8b32f1b8eea9d2ec9fae57689870467b0c286/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.926 350391 DEBUG nova.compute.manager [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:10:10 compute-0 podman[441046]: 2025-11-26 02:10:10.927407972 +0000 UTC m=+0.268695174 container init 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.929 350391 DEBUG nova.compute.manager [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing instance network info cache due to event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.930 350391 DEBUG oslo_concurrency.lockutils [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:10:10 compute-0 podman[441046]: 2025-11-26 02:10:10.940039186 +0000 UTC m=+0.281326338 container start 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 02:10:10 compute-0 nova_compute[350387]: 2025-11-26 02:10:10.949 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 5c8719f7-1028-4983-aa89-c99a459b6295_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:10 compute-0 podman[441046]: 2025-11-26 02:10:10.96873577 +0000 UTC m=+0.310022992 container attach 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5664 keys, 8642066 bytes, temperature: kUnknown
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010969779, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8642066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8604760, "index_size": 22028, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142949, "raw_average_key_size": 25, "raw_value_size": 8502873, "raw_average_value_size": 1501, "num_data_blocks": 904, "num_entries": 5664, "num_filter_entries": 5664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.970014) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8642066 bytes
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.979561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 6182, records dropped: 518 output_compression: NoCompression
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.979581) EVENT_LOG_v1 {"time_micros": 1764123010979571, "job": 46, "event": "compaction_finished", "compaction_time_micros": 72732, "compaction_time_cpu_micros": 29741, "output_level": 6, "num_output_files": 1, "total_output_size": 8642066, "num_input_records": 6182, "num_output_records": 5664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010980233, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123010981541, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.896920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.981914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.981921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.981924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.981927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:10 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:10:10.981929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.070 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.085 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 270d952c-e221-49ae-ba25-b259f07a2be3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.159 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] resizing rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.308 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] resizing rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 02:10:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 85 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.0 MiB/s wr, 27 op/s
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.475 350391 DEBUG nova.objects.instance [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.571 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.572 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Ensure instance console log exists: /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.573 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.574 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.574 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.589 350391 DEBUG nova.objects.instance [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lazy-loading 'migration_context' on Instance uuid 270d952c-e221-49ae-ba25-b259f07a2be3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.614 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.615 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Ensure instance console log exists: /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.616 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.617 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:11 compute-0 nova_compute[350387]: 2025-11-26 02:10:11.617 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
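
The Acquiring/acquired/released triples around _allocate_mdevs are oslo.concurrency's standard named-lock trace. A minimal sketch of the pattern that emits those three DEBUG lines (the lock name is from the log; the function body is illustrative only):

    # Minimal sketch of the oslo.concurrency named-lock pattern traced above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Serialized: only one thread at a time may inspect or claim vGPU
        # mediated devices. Returning an empty list mirrors the no-vGPU case
        # here, where the lock is held for about a millisecond.
        return []

    # Equivalent context-manager form, producing the same acquire/release log:
    with lockutils.lock("vgpu_resources"):
        pass  # critical section
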
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]: {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    "0": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "devices": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "/dev/loop3"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            ],
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_name": "ceph_lv0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_size": "21470642176",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "name": "ceph_lv0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "tags": {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_name": "ceph",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.crush_device_class": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.encrypted": "0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_id": "0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.vdo": "0"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            },
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "vg_name": "ceph_vg0"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        }
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    ],
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    "1": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "devices": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "/dev/loop4"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            ],
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_name": "ceph_lv1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_size": "21470642176",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "name": "ceph_lv1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "tags": {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_name": "ceph",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.crush_device_class": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.encrypted": "0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_id": "1",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.vdo": "0"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            },
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "vg_name": "ceph_vg1"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        }
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    ],
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    "2": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "devices": [
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "/dev/loop5"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            ],
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_name": "ceph_lv2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_size": "21470642176",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "name": "ceph_lv2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "tags": {
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.cluster_name": "ceph",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.crush_device_class": "",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.encrypted": "0",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osd_id": "2",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:                "ceph.vdo": "0"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            },
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "type": "block",
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:            "vg_name": "ceph_vg2"
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:        }
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]:    ]
Nov 26 02:10:11 compute-0 hardcore_varahamihira[441083]: }
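
The JSON block printed by the hardcore_varahamihira container has the shape of ceph-volume lvm list --format json output (cephadm runs such checks in short-lived ceph containers, which matches the podman died/remove lines that follow): a map from OSD id to the logical volumes backing it. A sketch that reduces a saved copy of this output to one line per OSD; the capture file name is hypothetical:

    # Sketch: summarize ceph-volume lvm list JSON like the block above into
    # "osd.N: LV path on device (osd_fsid ...)" lines.
    import json

    with open("ceph-volume-lvm-list.json") as fh:  # hypothetical capture
        osds = json.load(fh)

    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            print("osd.%s: %s on %s (osd_fsid %s)" % (
                osd_id,
                lv["lv_path"],
                ",".join(lv.get("devices", [])),
                lv.get("tags", {}).get("ceph.osd_fsid", "?")))
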
Nov 26 02:10:11 compute-0 systemd[1]: libpod-010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6.scope: Deactivated successfully.
Nov 26 02:10:11 compute-0 podman[441046]: 2025-11-26 02:10:11.801994949 +0000 UTC m=+1.143282131 container died 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 02:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83c81f2cd835a8b858947c6c8d8b32f1b8eea9d2ec9fae57689870467b0c286-merged.mount: Deactivated successfully.
Nov 26 02:10:11 compute-0 podman[441046]: 2025-11-26 02:10:11.910024308 +0000 UTC m=+1.251311470 container remove 010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:10:11 compute-0 systemd[1]: libpod-conmon-010829039f35b6aee76537b4c2519df6621e94005a6a1185d0abf991836635c6.scope: Deactivated successfully.
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.230 350391 DEBUG nova.network.neutron [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.257 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.258 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Instance network_info: |[{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.258 350391 DEBUG oslo_concurrency.lockutils [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.259 350391 DEBUG nova.network.neutron [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.265 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Start _get_guest_xml network_info=[{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
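
The network_info argument logged in the Start _get_guest_xml line is a JSON list of VIFs. A sketch of pulling the device name, MAC and fixed IPs out of a blob with that shape; the literal below is a trimmed copy of the VIF logged above:

    # Sketch: extract devname/MAC/fixed IPs from a network_info blob shaped
    # like the one logged above (trimmed to the fields used here).
    import json

    network_info = json.loads("""
    [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614",
      "address": "fa:16:3e:5a:7b:7e",
      "devname": "tap4b2c5180-2f",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.9",
                                        "type": "fixed"}]}]}}]
    """)

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips)
    # -> tap4b2c5180-2f fa:16:3e:5a:7b:7e ['10.100.0.9']
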
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.277 350391 WARNING nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.292 350391 DEBUG nova.virt.libvirt.host [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.293 350391 DEBUG nova.virt.libvirt.host [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.301 350391 DEBUG nova.virt.libvirt.host [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.302 350391 DEBUG nova.virt.libvirt.host [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.303 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.304 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.305 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.306 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.307 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.308 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.309 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.310 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.310 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.311 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.312 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.313 350391 DEBUG nova.virt.hardware [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
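
The Flavor/Image limits, "Build topologies for 1 vcpu(s) 1:1:1" and "Got 1 possible topologies" lines trace nova.virt.hardware's search for guest CPU topologies: enumerate (sockets, cores, threads) factorizations of the vCPU count within the limits, then sort by preference. A simplified sketch of that enumeration (not nova's exact algorithm), which for 1 vCPU yields exactly the VirtCPUTopology(cores=1,sockets=1,threads=1) chosen above:

    # Simplified sketch of the topology enumeration traced above: all
    # (sockets, cores, threads) products equal to the vCPU count that fit
    # the limits. For vcpus=1 this yields exactly [(1, 1, 1)].
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    assert possible_topologies(1) == [(1, 1, 1)]
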
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.321 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:10:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252480975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.798 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
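
The mon dump round trip above (issued by nova through oslo_concurrency.processutils, answered by ceph-mon as client.openstack) is how nova learns the monitor address it later writes into the disk XML. The same call, standalone and reduced to the monitor endpoints; it requires a reachable cluster and the client.openstack keyring:

    # Sketch: the same "ceph mon dump" call as above, reduced to the monitor
    # name/address pairs.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out).get("mons", []):
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))
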
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.852 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.866 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:12 compute-0 nova_compute[350387]: 2025-11-26 02:10:12.951 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.065190522 +0000 UTC m=+0.080019984 container create 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.026536858 +0000 UTC m=+0.041366360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:13 compute-0 systemd[1]: Started libpod-conmon-4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410.scope.
Nov 26 02:10:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.212486191 +0000 UTC m=+0.227315693 container init 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.230929648 +0000 UTC m=+0.245759110 container start 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.238265914 +0000 UTC m=+0.253095376 container attach 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 26 02:10:13 compute-0 crazy_kare[441460]: 167 167
Nov 26 02:10:13 compute-0 systemd[1]: libpod-4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410.scope: Deactivated successfully.
Nov 26 02:10:13 compute-0 conmon[441460]: conmon 4dcfdba1796d573628b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410.scope/container/memory.events
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.248217823 +0000 UTC m=+0.263047285 container died 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6125f749aeaa59720c5febbca0592025bf40b672c7a71d46e3d036d0387f55e3-merged.mount: Deactivated successfully.
Nov 26 02:10:13 compute-0 podman[441425]: 2025-11-26 02:10:13.321042035 +0000 UTC m=+0.335871467 container remove 4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kare, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 02:10:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:10:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2316859504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:10:13 compute-0 systemd[1]: libpod-conmon-4dcfdba1796d573628b2a79e9c6de01805116657d7fe5e369f38f07a59374410.scope: Deactivated successfully.
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.352 350391 DEBUG nova.compute.manager [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received event network-changed-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.352 350391 DEBUG nova.compute.manager [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Refreshing instance network info cache due to event network-changed-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.353 350391 DEBUG oslo_concurrency.lockutils [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.353 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.368 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.370 350391 DEBUG nova.virt.libvirt.vif [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-173154417',display_name='tempest-AttachInterfacesUnderV243Test-server-173154417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-173154417',id=6,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZtmevDBOH7h2uuNZDcJCbOFxIp1AvwcCYBRUuKNsTRUBZcQypMSSPUOMMpAITLGs2JRuuQVbR8AitbKv36s+fXFQUTo2Ffyoxd6fZW1aMdi088cBYkrvxHsEH3GZ43LA==',key_name='tempest-keypair-317693094',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339deb116b764070abc6d50520ee33c8',ramdisk_id='',reservation_id='r-i9hkqns0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-270246256',owner_user_name='tempest-AttachInterfacesUnderV243Test-270246256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aadae2b9a9834185b051c2bc59c6054a',uuid=5c8719f7-1028-4983-aa89-c99a459b6295,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.370 350391 DEBUG nova.network.os_vif_util [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converting VIF {"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.371 350391 DEBUG nova.network.os_vif_util [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.372 350391 DEBUG nova.objects.instance [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.387 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <uuid>5c8719f7-1028-4983-aa89-c99a459b6295</uuid>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <name>instance-00000006</name>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-173154417</nova:name>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:10:12</nova:creationTime>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:user uuid="aadae2b9a9834185b051c2bc59c6054a">tempest-AttachInterfacesUnderV243Test-270246256-project-member</nova:user>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:project uuid="339deb116b764070abc6d50520ee33c8">tempest-AttachInterfacesUnderV243Test-270246256</nova:project>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <nova:port uuid="4b2c5180-2ff0-4b98-90cb-e0e6ba068614">
Nov 26 02:10:13 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <system>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="serial">5c8719f7-1028-4983-aa89-c99a459b6295</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="uuid">5c8719f7-1028-4983-aa89-c99a459b6295</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </system>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <os>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </os>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <features>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </features>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/5c8719f7-1028-4983-aa89-c99a459b6295_disk">
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </source>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/5c8719f7-1028-4983-aa89-c99a459b6295_disk.config">
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </source>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:10:13 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:5a:7b:7e"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <target dev="tap4b2c5180-2f"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/console.log" append="off"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <video>
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </video>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:10:13 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:10:13 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:10:13 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:10:13 compute-0 nova_compute[350387]: </domain>
Nov 26 02:10:13 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
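
[Note: the XML above is the guest definition nova generated for instance 5c8719f7-1028-4983-aa89-c99a459b6295: RBD-backed root and config-drive disks authenticated through the ceph secret, a virtio VIF on tap4b2c5180-2f, and a q35-style PCIe controller set. As a minimal sketch (assuming the libvirt-python bindings and access to qemu:///system on the compute host), the same definition can be read back from libvirt once the domain exists, which is useful when diffing what nova asked for against what libvirt persisted:]

    # Sketch: read back the live domain XML for the instance in this log.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt

    INSTANCE_UUID = "5c8719f7-1028-4983-aa89-c99a459b6295"  # from the log above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(INSTANCE_UUID)
        print(dom.XMLDesc(0))  # 0 = current (live) definition
    finally:
        conn.close()
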
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.387 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Preparing to wait for external event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.387 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.388 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.388 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.388 350391 DEBUG nova.virt.libvirt.vif [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-173154417',display_name='tempest-AttachInterfacesUnderV243Test-server-173154417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-173154417',id=6,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZtmevDBOH7h2uuNZDcJCbOFxIp1AvwcCYBRUuKNsTRUBZcQypMSSPUOMMpAITLGs2JRuuQVbR8AitbKv36s+fXFQUTo2Ffyoxd6fZW1aMdi088cBYkrvxHsEH3GZ43LA==',key_name='tempest-keypair-317693094',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339deb116b764070abc6d50520ee33c8',ramdisk_id='',reservation_id='r-i9hkqns0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-270246256',owner_user_name='tempest-AttachInterfacesUnderV243Test-270246256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:10:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aadae2b9a9834185b051c2bc59c6054a',uuid=5c8719f7-1028-4983-aa89-c99a459b6295,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.389 350391 DEBUG nova.network.os_vif_util [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converting VIF {"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.389 350391 DEBUG nova.network.os_vif_util [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.390 350391 DEBUG os_vif [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.390 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.390 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.391 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.395 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.395 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b2c5180-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.396 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b2c5180-2f, col_values=(('external_ids', {'iface-id': '4b2c5180-2ff0-4b98-90cb-e0e6ba068614', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:7b:7e', 'vm-uuid': '5c8719f7-1028-4983-aa89-c99a459b6295'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.398 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 NetworkManager[48886]: <info>  [1764123013.3999] manager: (tap4b2c5180-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.400 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.407 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.408 350391 INFO os_vif [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f')#033[00m
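
[Note: the three ovsdbapp commands above are the whole of the OVS plug: AddBridgeCommand(may_exist=True, datapath_type=system) is a no-op here because br-int already exists, AddPortCommand attaches the tap device, and DbSetCommand stamps the Interface row with the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches when it claims the port a moment later. A rough equivalent of that transaction expressed with the stock ovs-vsctl CLI, wrapped in Python purely for illustration (an approximation, not os-vif's actual code path):]

    # Sketch: approximate the os-vif OVS plug transaction with ovs-vsctl.
    # All values are copied from the log lines above.
    import subprocess

    bridge, port = "br-int", "tap4b2c5180-2f"
    external_ids = {
        "iface-id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:5a:7b:7e",
        "vm-uuid": "5c8719f7-1028-4983-aa89-c99a459b6295",
    }

    # AddBridgeCommand(may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge,
                    "--", "set", "Bridge", bridge, "datapath_type=system"],
                   check=True)
    # AddPortCommand + DbSetCommand, batched like the single IDL transaction
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)
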
Nov 26 02:10:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 121 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 33 op/s
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.479 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.479 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.480 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] No VIF found with MAC fa:16:3e:5a:7b:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.480 350391 INFO nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Using config drive#033[00m
Nov 26 02:10:13 compute-0 nova_compute[350387]: 2025-11-26 02:10:13.528 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:13 compute-0 podman[441493]: 2025-11-26 02:10:13.61730684 +0000 UTC m=+0.098028369 container create 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:10:13 compute-0 podman[441493]: 2025-11-26 02:10:13.582220677 +0000 UTC m=+0.062942246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:10:13 compute-0 systemd[1]: Started libpod-conmon-2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a.scope.
Nov 26 02:10:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99843c0085f6c80f177a5aed179d4a3e7648f72b75191fa6dafb23ff7421803f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99843c0085f6c80f177a5aed179d4a3e7648f72b75191fa6dafb23ff7421803f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99843c0085f6c80f177a5aed179d4a3e7648f72b75191fa6dafb23ff7421803f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99843c0085f6c80f177a5aed179d4a3e7648f72b75191fa6dafb23ff7421803f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:13 compute-0 podman[441493]: 2025-11-26 02:10:13.83166209 +0000 UTC m=+0.312383629 container init 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 02:10:13 compute-0 podman[441493]: 2025-11-26 02:10:13.851484205 +0000 UTC m=+0.332205734 container start 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:10:13 compute-0 podman[441493]: 2025-11-26 02:10:13.857968297 +0000 UTC m=+0.338689796 container attach 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.304 350391 DEBUG nova.network.neutron [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updating instance_info_cache with network_info: [{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.342 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Releasing lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.342 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Instance network_info: |[{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.343 350391 DEBUG oslo_concurrency.lockutils [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.344 350391 DEBUG nova.network.neutron [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Refreshing network info cache for port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.348 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Start _get_guest_xml network_info=[{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.367 350391 WARNING nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.377 350391 DEBUG nova.virt.libvirt.host [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.378 350391 DEBUG nova.virt.libvirt.host [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.393 350391 DEBUG nova.virt.libvirt.host [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.394 350391 DEBUG nova.virt.libvirt.host [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
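
[Note: nova's host probe above looks for a CPU controller first via cgroups v1 (missing on this RHEL 9 host) and then via cgroups v2, where it finds one. On a v2 host that check boils down to whether "cpu" appears in the unified hierarchy's controller list; a minimal sketch of the idea (assuming a mounted unified hierarchy; this is not nova's actual implementation):]

    # Sketch: detect a cgroup-v2 CPU controller. On a unified-hierarchy
    # host, /sys/fs/cgroup/cgroup.controllers lists available controllers.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        if not controllers.exists():  # not a cgroup-v2 (unified) host
            return False
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
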
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.395 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.395 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.396 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.397 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.398 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.398 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.399 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.400 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.400 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.401 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.402 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.402 350391 DEBUG nova.virt.hardware [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
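
[Note: with no limits or preferences from the flavor or image (all 0:0:0), the topology search for 1 vCPU can only yield sockets=1, cores=1, threads=1, hence the single possible topology above. A toy enumeration that mirrors the idea (illustrative only, not nova's hardware.py):]

    # Sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, mirroring the "1 possible topologies" line.
    def possible_topologies(vcpus: int, max_each: int = 65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_each) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), (2,2,1), ...
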
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.407 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.596 350391 INFO nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Creating config drive at /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.619 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcqo0gr3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.755 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcqo0gr3" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.812 350391 DEBUG nova.storage.rbd_utils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] rbd image 5c8719f7-1028-4983-aa89-c99a459b6295_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:14 compute-0 nova_compute[350387]: 2025-11-26 02:10:14.826 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config 5c8719f7-1028-4983-aa89-c99a459b6295_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:14 compute-0 agitated_swirles[441521]: {
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_id": 0,
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "type": "bluestore"
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    },
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_id": 2,
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "type": "bluestore"
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    },
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_id": 1,
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:        "type": "bluestore"
Nov 26 02:10:14 compute-0 agitated_swirles[441521]:    }
Nov 26 02:10:14 compute-0 agitated_swirles[441521]: }
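
[Note: the agitated_swirles container (a cephadm-driven run of the quay.io/ceph/ceph image, per the podman lines above) prints a JSON inventory of this host's three bluestore OSDs, keyed by osd_uuid. A small sketch for flattening such a blob into an osd_id-to-device table (trimmed to one OSD to stay self-contained; values are from the log):]

    # Sketch: parse the per-host OSD inventory JSON printed above.
    import json

    raw = '''{"835781ef-644a-4834-abb3-029e5bcba0ff":
      {"ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
       "device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0,
       "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
       "type": "bluestore"}}'''  # the log shows osd.0-osd.2; one kept here

    for osd_uuid, info in sorted(json.loads(raw).items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"type={info['type']}  fsid={info['ceph_fsid']}")
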
Nov 26 02:10:14 compute-0 systemd[1]: libpod-2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a.scope: Deactivated successfully.
Nov 26 02:10:14 compute-0 systemd[1]: libpod-2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a.scope: Consumed 1.121s CPU time.
Nov 26 02:10:14 compute-0 podman[441493]: 2025-11-26 02:10:14.983268894 +0000 UTC m=+1.463990423 container died 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:10:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:10:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927162198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-99843c0085f6c80f177a5aed179d4a3e7648f72b75191fa6dafb23ff7421803f-merged.mount: Deactivated successfully.
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.044 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:10:15 compute-0 podman[441493]: 2025-11-26 02:10:15.064122911 +0000 UTC m=+1.544844400 container remove 2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:10:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:15 compute-0 systemd[1]: libpod-conmon-2e08f1a07ab52b4f04e3f8dada3453a3503686861fc123821213dd38f04cfd1a.scope: Deactivated successfully.
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.103 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.112 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:10:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:10:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4fe034cd-8f24-4e14-b0d4-af7b982f02e3 does not exist
Nov 26 02:10:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 04a79470-2cda-4098-8891-57a7a71e0409 does not exist
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.147 350391 DEBUG oslo_concurrency.processutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config 5c8719f7-1028-4983-aa89-c99a459b6295_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.148 350391 INFO nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Deleting local config drive /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295/disk.config because it was imported into RBD.#033[00m
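
[Note: this completes the config-drive sequence for an RBD-backed instance: nova renders the metadata into a temporary directory, builds an ISO9660 image with mkisofs, imports it into the vms pool as 5c8719f7-..._disk.config (the network cdrom in the domain XML earlier), then deletes the local ISO. The same two external commands, replayed as a sketch (paths and names copied from the log; /tmp/metadata_dir stands in for nova's temporary directory, and the publisher string is shortened):]

    # Sketch: the config-drive build-and-import sequence, via subprocess.
    import subprocess

    uuid = "5c8719f7-1028-4983-aa89-c99a459b6295"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/metadata_dir"],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
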
Nov 26 02:10:15 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 26 02:10:15 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 26 02:10:15 compute-0 kernel: tap4b2c5180-2f: entered promiscuous mode
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.2769] manager: (tap4b2c5180-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.278 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 ovn_controller[89102]: 2025-11-26T02:10:15Z|00066|binding|INFO|Claiming lport 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 for this chassis.
Nov 26 02:10:15 compute-0 ovn_controller[89102]: 2025-11-26T02:10:15Z|00067|binding|INFO|4b2c5180-2ff0-4b98-90cb-e0e6ba068614: Claiming fa:16:3e:5a:7b:7e 10.100.0.9
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.287 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:7b:7e 10.100.0.9'], port_security=['fa:16:3e:5a:7b:7e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c8719f7-1028-4983-aa89-c99a459b6295', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e89566-5c79-472a-819f-45cd3bbc2134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339deb116b764070abc6d50520ee33c8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8bca6503-83d9-4549-b632-308ae47fd689', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc53788-f43e-4c82-98d2-b64a154786fb, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=4b2c5180-2ff0-4b98-90cb-e0e6ba068614) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.289 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 in datapath 14e89566-5c79-472a-819f-45cd3bbc2134 bound to our chassis#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.291 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e89566-5c79-472a-819f-45cd3bbc2134#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.308 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5bb3a4-7c4e-4a64-b9dd-b9286ebfd20f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.309 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e89566-51 in ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.311 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e89566-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.311 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[66d24a1a-f329-4edc-8cb7-a94958809754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.312 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[eae744c8-6fee-488e-8b46-869d957d2091]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 systemd-machined[138512]: New machine qemu-6-instance-00000006.
Nov 26 02:10:15 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.322 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 ovn_controller[89102]: 2025-11-26T02:10:15Z|00068|binding|INFO|Setting lport 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 ovn-installed in OVS
Nov 26 02:10:15 compute-0 ovn_controller[89102]: 2025-11-26T02:10:15Z|00069|binding|INFO|Setting lport 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 up in Southbound
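
[Note: ovn-controller claiming the lport, marking it ovn-installed in OVS, and setting it up in the Southbound DB is the chain that ultimately produces the network-vif-plugged event nova armed itself for earlier in this log. To check the binding from the chassis, a hedged sketch around the standard ovn-sbctl CLI (in containerized deployments the tool may live inside the ovn_controller container and need an explicit --db endpoint):]

    # Sketch: read back the Port_Binding row ovn-controller just updated.
    import subprocess

    lport = "4b2c5180-2ff0-4b98-90cb-e0e6ba068614"
    out = subprocess.run(["ovn-sbctl", "list", "Port_Binding", lport],
                         check=True, capture_output=True, text=True).stdout
    print(out)  # expect chassis set to this host and up : true
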
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.328 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[d641d32d-a27a-4ea8-a1da-83cced7224aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.331 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 systemd-udevd[441749]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.355 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[34890d5e-4c71-45da-9f5d-25d152522b50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.3743] device (tap4b2c5180-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.3802] device (tap4b2c5180-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.385 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[8f92732a-1416-4e97-9751-11aa926e9968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.409 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.4130] manager: (tap14e89566-50): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.412 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[066eca35-2e4b-4291-9831-bb6e884f8ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 02:10:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 149 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 58 op/s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.447 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[d99216b3-bed0-422a-be09-ba28c95fcea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.452 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[ee52949b-d703-4f7c-8c30-3f299e4018ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.4732] device (tap14e89566-50): carrier: link connected
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.480 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[4f65a0f3-8a43-4bd7-926a-1380428faca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.503 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[eb78240b-98bb-4027-a403-172f6b0a5f05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e89566-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:dc:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664610, 'reachable_time': 19157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 441798, 'error': None, 'target': 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.519 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[31cb1eaf-99ed-43e8-a675-c2589ce366ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:dcbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664610, 'tstamp': 664610}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 441799, 'error': None, 'target': 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.534 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[274dcd59-2014-4020-acc8-b3b94c2f80bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e89566-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:dc:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664610, 'reachable_time': 19157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 441800, 'error': None, 'target': 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.569 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d4e9ca-3aac-4ce2-aee3-451802dbfa56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
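
The privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK / RTM_NEWADDR) for tap14e89566-51, fetched inside the ovnmeta- namespace on the agent's behalf. A minimal pyroute2 sketch of reading the same IFLA_* attributes directly, assuming the namespace and device named in the log still exist and the caller is root:

    # Sketch only; namespace and device names are copied from the log above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134') as ns:
        for link in ns.get_links():  # RTM_NEWLINK messages, as dumped above
            if link.get_attr('IFLA_IFNAME') == 'tap14e89566-51':
                print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:b8:dc:bb
                      link.get_attr('IFLA_OPERSTATE'),  # UP
                      link.get_attr('IFLA_MTU'))        # 1500
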
Nov 26 02:10:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:10:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025358670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.623 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.635 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
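
The 0.523 s "ceph mon dump" above is nova's RBD backend discovering monitor addresses before it builds the disk XML. A hedged sketch of the same call and of the 'mons' list it parses (client id and conf path are the logged ones; the JSON layout is ceph's standard mon-dump format):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('public_addr'))

The resulting host/port pairs are what appear later as <host name="192.168.122.100" port="6789"/> in the guest XML.
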
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.636 350391 DEBUG nova.virt.libvirt.vif [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1930893835',display_name='tempest-ServersTestManualDisk-server-1930893835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1930893835',id=7,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIw6oaT02VQzNhSb55J5HKxH2V3Dbs5h1DE4yOuNN2iNQuMYPDSHQiBizY1qXSUSi68iXRvtrlwURaP2sypM0fG+fUOLtbd/ORld54R8DrYOis2sZUEXHqML8q2KtGbp/Q==',key_name='tempest-keypair-1650669487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ec6a84328cd54e0fad4f07089c4e4e95',ramdisk_id='',reservation_id='r-r1fs902s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-501918347',owner_user_name='tempest-ServersTestManualDisk-501918347-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:10:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c3747a1e5af44b0bcc1d0a5f8241343',uuid=270d952c-e221-49ae-ba25-b259f07a2be3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": 
"1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.636 350391 DEBUG nova.network.os_vif_util [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converting VIF {"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.637 350391 DEBUG nova.network.os_vif_util [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.639 350391 DEBUG nova.objects.instance [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lazy-loading 'pci_devices' on Instance uuid 270d952c-e221-49ae-ba25-b259f07a2be3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.652 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[382b815e-c5a2-4fa1-b95f-a95d7ee22932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.653 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e89566-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.654 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.654 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e89566-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.656 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.6593] manager: (tap14e89566-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 26 02:10:15 compute-0 kernel: tap14e89566-50: entered promiscuous mode
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.662 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e89566-50, col_values=(('external_ids', {'iface-id': '6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
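
The DelPortCommand / AddPortCommand / DbSetCommand sequence above (drop the metadata tap from br-ex if present, add it to br-int, then point its external_ids:iface-id at the OVN port) is ovsdbapp driving the local Open vSwitch DB. A minimal ovsdbapp sketch of the same three commands batched into one transaction; the db.sock path is the usual OVS default and an assumption here, while the names and UUID are from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap14e89566-50', 'br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap14e89566-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap14e89566-50',
            ('external_ids',
             {'iface-id': '6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b'})))

"Transaction caused no change" in the log simply means the DelPort was a no-op: the port was not actually on br-ex.
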
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.660 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <uuid>270d952c-e221-49ae-ba25-b259f07a2be3</uuid>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <name>instance-00000007</name>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:name>tempest-ServersTestManualDisk-server-1930893835</nova:name>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:10:14</nova:creationTime>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:user uuid="1c3747a1e5af44b0bcc1d0a5f8241343">tempest-ServersTestManualDisk-501918347-project-member</nova:user>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:project uuid="ec6a84328cd54e0fad4f07089c4e4e95">tempest-ServersTestManualDisk-501918347</nova:project>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <nova:port uuid="1bb955bd-fd16-48e2-8413-ad1ade1cf2e1">
Nov 26 02:10:15 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 26 02:10:15 compute-0 ovn_controller[89102]: 2025-11-26T02:10:15Z|00070|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:10:15 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <system>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="serial">270d952c-e221-49ae-ba25-b259f07a2be3</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="uuid">270d952c-e221-49ae-ba25-b259f07a2be3</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </system>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <os>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </os>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <features>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </features>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/270d952c-e221-49ae-ba25-b259f07a2be3_disk">
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </source>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/270d952c-e221-49ae-ba25-b259f07a2be3_disk.config">
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </source>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:10:15 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:41:7c:0a"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <target dev="tap1bb955bd-fd"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/console.log" append="off"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <video>
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </video>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:10:15 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:10:15 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:10:15 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:10:15 compute-0 nova_compute[350387]: </domain>
Nov 26 02:10:15 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
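
Everything between "End _get_guest_xml" and the trailer above is the libvirt domain document nova just generated: q35 machine type, host-model CPU, both disks served from the Ceph "vms" pool via rbd on 192.168.122.100:6789, and the tap1bb955bd-fd vNIC. A minimal libvirt-python sketch of what spawn does with such a document, assuming the XML above has been saved to a hypothetical domain.xml and qemu:///system is reachable:

    import libvirt

    with open('domain.xml') as f:   # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot it; nova then waits for vif-plugged
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
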
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.661 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Preparing to wait for external event network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.661 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.662 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.662 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
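
The Acquiring / acquired / released triplet above is oslo.concurrency's lockutils guarding nova's per-instance event table while the network-vif-plugged waiter is registered. The same pattern as a sketch, with the lock name copied from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock('270d952c-e221-49ae-ba25-b259f07a2be3-events'):
        # critical section: register or pop an expected external event
        pass
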
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.663 350391 DEBUG nova.virt.libvirt.vif [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1930893835',display_name='tempest-ServersTestManualDisk-server-1930893835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1930893835',id=7,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIw6oaT02VQzNhSb55J5HKxH2V3Dbs5h1DE4yOuNN2iNQuMYPDSHQiBizY1qXSUSi68iXRvtrlwURaP2sypM0fG+fUOLtbd/ORld54R8DrYOis2sZUEXHqML8q2KtGbp/Q==',key_name='tempest-keypair-1650669487',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ec6a84328cd54e0fad4f07089c4e4e95',ramdisk_id='',reservation_id='r-r1fs902s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-501918347',owner_user_name='tempest-ServersTestManualDisk-501918347-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:10:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c3747a1e5af44b0bcc1d0a5f8241343',uuid=270d952c-e221-49ae-ba25-b259f07a2be3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": 
"1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.663 350391 DEBUG nova.network.os_vif_util [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converting VIF {"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.663 350391 DEBUG nova.network.os_vif_util [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.672 350391 DEBUG os_vif [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
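
The "Plugging vif" line above is os-vif's public entry point: nova converts its VIF dict to the VIFOpenVSwitch object two lines up and hands it to the 'ovs' plugin. An illustrative sketch, not nova's exact code; field values are copied from the log, the InstanceInfo fields are assumptions, and plugging needs root/privsep:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id='1bb955bd-fd16-48e2-8413-ad1ade1cf2e1',
        address='fa:16:3e:41:7c:0a',
        bridge_name='br-int',
        vif_name='tap1bb955bd-fd',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='1bb955bd-fd16-48e2-8413-ad1ade1cf2e1'),
        network=network.Network(
            id='578f6a80-1a41-45c9-950f-a1b20db33909', bridge='br-int'))
    inst = instance_info.InstanceInfo(
        uuid='270d952c-e221-49ae-ba25-b259f07a2be3',
        name='instance-00000007')
    os_vif.plug(my_vif, inst)
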
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.673 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.674 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.674 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.674 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.676 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.676 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bb955bd-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.677 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bb955bd-fd, col_values=(('external_ids', {'iface-id': '1bb955bd-fd16-48e2-8413-ad1ade1cf2e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:7c:0a', 'vm-uuid': '270d952c-e221-49ae-ba25-b259f07a2be3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.679 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 NetworkManager[48886]: <info>  [1764123015.6812] manager: (tap1bb955bd-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.681 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.691 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.692 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e89566-5c79-472a-819f-45cd3bbc2134.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e89566-5c79-472a-819f-45cd3bbc2134.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.693 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[15731e5a-2d82-45dc-ab88-bcbf29fc7366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.694 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-14e89566-5c79-472a-819f-45cd3bbc2134
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/14e89566-5c79-472a-819f-45cd3bbc2134.pid.haproxy
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID 14e89566-5c79-472a-819f-45cd3bbc2134
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
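
The rendered haproxy config above binds the link-local metadata address 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each one with X-OVN-Network-ID so the agent can resolve which network is asking. A hedged smoke test of that listener once haproxy reports "Loading success." further down (namespace name from the log; needs root and the curl binary on the host):

    import subprocess

    ns = 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134'
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        check=False)
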
Nov 26 02:10:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:15.695 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134', 'env', 'PROCESS_TAG=haproxy-14e89566-5c79-472a-819f-45cd3bbc2134', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e89566-5c79-472a-819f-45cd3bbc2134.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.706 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.707 350391 INFO os_vif [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd')#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.760 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.760 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.760 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] No VIF found with MAC fa:16:3e:41:7c:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.761 350391 INFO nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Using config drive#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.793 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.970 350391 DEBUG nova.compute.manager [req-e5f736cd-882d-4fe3-8003-4f4786611ccc req-b2083553-4847-42e3-ac92-b693ae0f34e1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.971 350391 DEBUG oslo_concurrency.lockutils [req-e5f736cd-882d-4fe3-8003-4f4786611ccc req-b2083553-4847-42e3-ac92-b693ae0f34e1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.973 350391 DEBUG oslo_concurrency.lockutils [req-e5f736cd-882d-4fe3-8003-4f4786611ccc req-b2083553-4847-42e3-ac92-b693ae0f34e1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.974 350391 DEBUG oslo_concurrency.lockutils [req-e5f736cd-882d-4fe3-8003-4f4786611ccc req-b2083553-4847-42e3-ac92-b693ae0f34e1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:15 compute-0 nova_compute[350387]: 2025-11-26 02:10:15.975 350391 DEBUG nova.compute.manager [req-e5f736cd-882d-4fe3-8003-4f4786611ccc req-b2083553-4847-42e3-ac92-b693ae0f34e1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Processing event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.063 350391 DEBUG nova.network.neutron [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated VIF entry in instance network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.064 350391 DEBUG nova.network.neutron [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.100 350391 DEBUG oslo_concurrency.lockutils [req-d0e50a93-8072-4d9b-98d0-0b506a6b3b73 req-07e0769b-5f9d-40a1-b457-fe82cc6ecb8f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:10:16 compute-0 podman[441853]: 2025-11-26 02:10:16.271066126 +0000 UTC m=+0.096266850 container create db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:10:16 compute-0 podman[441853]: 2025-11-26 02:10:16.236580199 +0000 UTC m=+0.061780953 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:10:16 compute-0 systemd[1]: Started libpod-conmon-db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e.scope.
Nov 26 02:10:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51be4e4eff0774c18211445f63e15087e46460fd324f4463a2ce3dc4a716fb82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:16 compute-0 podman[441853]: 2025-11-26 02:10:16.393313873 +0000 UTC m=+0.218514687 container init db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:10:16 compute-0 podman[441853]: 2025-11-26 02:10:16.403242061 +0000 UTC m=+0.228442785 container start db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.406 350391 INFO nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Creating config drive at /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.412 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwab2eev execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:16 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [NOTICE]   (441873) : New worker (441876) forked
Nov 26 02:10:16 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [NOTICE]   (441873) : Loading success.
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.563 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwab2eev" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
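
The config drive is just an ISO9660/Joliet image built from a scratch directory (the /tmp/tmpkwab2eev above) holding the openstack/latest/ metadata tree. A sketch reproducing the logged mkisofs invocation; the flags are exactly the logged ones, but both paths below are placeholders:

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
         '-V', 'config-2',       # the volume label cloud-init probes for
         'configdrive_dir/'],    # placeholder for the metadata tree
        check=True)
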
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.640 350391 DEBUG nova.storage.rbd_utils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] rbd image 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.652 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
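
Once the rbd import above finishes, the ISO lives in the "vms" pool as 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config and the local file can be discarded. A sketch of confirming that with the rados/rbd Python bindings (pool, client id and conf path are the ones from the log):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            images = rbd.RBD().list(ioctx)
            print([name for name in images if name.endswith('_disk.config')])
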
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.764 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123016.7633128, 5c8719f7-1028-4983-aa89-c99a459b6295 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.765 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] VM Started (Lifecycle Event)#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.769 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.780 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.790 350391 INFO nova.virt.libvirt.driver [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Instance spawned successfully.#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.794 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.806 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.831 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.840 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.841 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.841 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.842 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.843 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.843 350391 DEBUG nova.virt.libvirt.driver [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.877 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.877 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123016.76345, 5c8719f7-1028-4983-aa89-c99a459b6295 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.877 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.958 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.967 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123016.7780488, 5c8719f7-1028-4983-aa89-c99a459b6295 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.968 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.973 350391 INFO nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Took 11.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.974 350391 DEBUG nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.992 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.993 350391 DEBUG oslo_concurrency.processutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config 270d952c-e221-49ae-ba25-b259f07a2be3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:10:16 compute-0 nova_compute[350387]: 2025-11-26 02:10:16.995 350391 INFO nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Deleting local config drive /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3/disk.config because it was imported into RBD.#033[00m
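
The records from 02:10:16.412 to 02:10:16.995 above trace Nova's config-drive path on this Ceph-backed host: pack the staged metadata into an ISO9660 image with mkisofs, import it into the vms RBD pool, then delete the local copy once the RBD image is authoritative. A minimal sketch of that sequence, using plain subprocess in place of oslo_concurrency.processutils; the paths, pool, and Ceph user come from the log, everything else is illustrative:

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid, staging_dir, pool="vms",
                                      rbd_user="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        # 1. Pack the staged metadata directory into an ISO9660 config drive
        #    (the -publisher argument seen in the log is omitted for brevity).
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r",
             "-V", "config-2", staging_dir],
            check=True)
        # 2. Import the ISO into the Ceph pool backing the instance disks.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", rbd_user, "--conf", conf],
            check=True)
        # 3. The RBD copy is now the source of truth; drop the local file.
        os.unlink(iso)
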
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.004 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.041 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.068 350391 INFO nova.compute.manager [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Took 12.11 seconds to build instance.
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.095 350391 DEBUG oslo_concurrency.lockutils [None req-e13ffca6-c37b-47f7-ba51-493a2a3b271b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
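
Both spawns in this window ride through the same Started -> Paused -> Resumed lifecycle burst while task_state is still spawning, which is why every sync_power_state pass above logs "pending task (spawning). Skip." The integers in those lines ("current DB power_state: 0, VM power_state: 1") are Nova's power-state constants; a small decoder for reading them off a captured log, with the mapping taken from nova.compute.power_state:

    # Integer power states as defined in nova.compute.power_state.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    def explain(db_state, vm_state):
        return (f"DB says {POWER_STATE.get(db_state, db_state)}, "
                f"hypervisor says {POWER_STATE.get(vm_state, vm_state)}")

    print(explain(0, 1))  # DB says NOSTATE, hypervisor says RUNNING
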
Nov 26 02:10:17 compute-0 kernel: tap1bb955bd-fd: entered promiscuous mode
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.1169] manager: (tap1bb955bd-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.121 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 ovn_controller[89102]: 2025-11-26T02:10:17Z|00071|binding|INFO|Claiming lport 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 for this chassis.
Nov 26 02:10:17 compute-0 ovn_controller[89102]: 2025-11-26T02:10:17Z|00072|binding|INFO|1bb955bd-fd16-48e2-8413-ad1ade1cf2e1: Claiming fa:16:3e:41:7c:0a 10.100.0.4
Nov 26 02:10:17 compute-0 systemd-udevd[441767]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.136 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.146 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:7c:0a 10.100.0.4'], port_security=['fa:16:3e:41:7c:0a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '270d952c-e221-49ae-ba25-b259f07a2be3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-578f6a80-1a41-45c9-950f-a1b20db33909', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ec6a84328cd54e0fad4f07089c4e4e95', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92c65b7b-9431-492d-8fb4-9a245526b800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8edeb8c-abc9-4149-8939-cbe5e03d5fa6, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.149 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 in datapath 578f6a80-1a41-45c9-950f-a1b20db33909 bound to our chassis
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.151 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 578f6a80-1a41-45c9-950f-a1b20db33909
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.1640] device (tap1bb955bd-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.168 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd699e9-0e01-447e-b15b-cb194e6190a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.1700] device (tap1bb955bd-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.169 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap578f6a80-11 in ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.178 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap578f6a80-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.178 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2b8243-98f8-42ef-ae1a-62ee69febe20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.180 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c5900cc3-c7d7-4121-a74e-b0661f7344ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 systemd-machined[138512]: New machine qemu-7-instance-00000007.
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.196 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[31c6a03e-efcb-4cff-b003-22f9859564df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.227 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d64895a3-d337-488c-b96e-3ed26a6fd2c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.234 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 ovn_controller[89102]: 2025-11-26T02:10:17Z|00073|binding|INFO|Setting lport 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 ovn-installed in OVS
Nov 26 02:10:17 compute-0 ovn_controller[89102]: 2025-11-26T02:10:17Z|00074|binding|INFO|Setting lport 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 up in Southbound
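The three ovn_controller records above are the normal bind sequence for a freshly plugged port: claim the logical port for this chassis, mark the interface ovn-installed in OVS, then flip the Port_Binding row up in the Southbound DB. The same state can be checked by hand; a sketch assuming ovn-sbctl is reachable on this chassis:

    import subprocess

    # Ask the OVN Southbound DB for the binding the controller just claimed.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # chassis, tunnel_key, up, external_ids, ...
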
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.251 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.269 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[287a6e32-1ffd-4432-a24f-39ac2e5e3754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.2781] manager: (tap578f6a80-10): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.279 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf458a7-6cd9-4195-b341-3d100904b7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.317 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[e833e620-0098-4d85-89f5-d65187b34030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.320 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[7a50c5c9-f2d5-461d-8700-17aa1ee4cba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.345 350391 DEBUG nova.network.neutron [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updated VIF entry in instance network info cache for port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.346 350391 DEBUG nova.network.neutron [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updating instance_info_cache with network_info: [{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
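
The instance_info_cache payload above is serialized as plain JSON, so the bound port's addressing can be pulled straight out of a captured line; a short extraction sketch over a trimmed copy of that entry (field values reproduced from the log):

    import json

    # Trimmed copy of the network_info entry logged above.
    vif_json = '''[{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1",
                    "address": "fa:16:3e:41:7c:0a",
                    "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909",
                                "bridge": "br-int",
                                "subnets": [{"cidr": "10.100.0.0/28",
                                             "ips": [{"address": "10.100.0.4"}]}]},
                    "devname": "tap1bb955bd-fd", "active": false}]'''

    for vif in json.loads(vif_json):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips)
    # tap1bb955bd-fd fa:16:3e:41:7c:0a ['10.100.0.4']
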
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.3491] device (tap578f6a80-10): carrier: link connected
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.355 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[de147b39-63d5-494a-8ff9-d5d32748b9a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.369 350391 DEBUG oslo_concurrency.lockutils [req-efa4957e-3aa1-431d-bd71-61b17ac8870c req-faf5e6cb-7ab8-4928-8eca-e29e7498afa8 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.390 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6e63bed9-a39a-4da8-838d-9ed3f0807759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap578f6a80-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664797, 'reachable_time': 32539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442000, 'error': None, 'target': 'ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.416 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[910ce244-6d15-46a6-b47d-14948ef1c40b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:af7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664797, 'tstamp': 664797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442001, 'error': None, 'target': 'ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 65 op/s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.444 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0646a7af-2d13-4fcb-8d74-de20d1c8db8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap578f6a80-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:0a:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664797, 'reachable_time': 32539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442002, 'error': None, 'target': 'ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.493 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[33b423f4-e9aa-4713-a68c-126dc1ae1cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.592 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1928bcd3-3a6f-4f8e-892a-6de1c7fcfe4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.595 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap578f6a80-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.596 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.598 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap578f6a80-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:10:17 compute-0 kernel: tap578f6a80-10: entered promiscuous mode
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.604 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 NetworkManager[48886]: <info>  [1764123017.6086] manager: (tap578f6a80-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.608 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap578f6a80-10, col_values=(('external_ids', {'iface-id': '3ad46a56-806e-4d01-abd0-9720b5a6bbac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:10:17 compute-0 ovn_controller[89102]: 2025-11-26T02:10:17Z|00075|binding|INFO|Releasing lport 3ad46a56-806e-4d01-abd0-9720b5a6bbac from this chassis (sb_readonly=0)
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.610 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.641 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.641 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/578f6a80-1a41-45c9-950f-a1b20db33909.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/578f6a80-1a41-45c9-950f-a1b20db33909.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.643 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[189afbd0-2478-4cd8-b15a-5b279503fd2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.644 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-578f6a80-1a41-45c9-950f-a1b20db33909
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/578f6a80-1a41-45c9-950f-a1b20db33909.pid.haproxy
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID 578f6a80-1a41-45c9-950f-a1b20db33909
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 26 02:10:17 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:17.645 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909', 'env', 'PROCESS_TAG=haproxy-578f6a80-1a41-45c9-950f-a1b20db33909', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/578f6a80-1a41-45c9-950f-a1b20db33909.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
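
Everything the metadata agent provisioned in this stretch is derived from the network UUID: the ovnmeta-<uuid> namespace, the tap...-10/-11 veth pair, the rendered haproxy config, its pidfile, and finally the rootwrapped haproxy launch above. A reconstruction of those derivations; the namespace and file paths are read directly from the log, while the 10-character truncation in the veth names is inferred from "tap578f6a80-10"/"-11" rather than quoted from Neutron's source:

    NET = "578f6a80-1a41-45c9-950f-a1b20db33909"

    namespace   = f"ovnmeta-{NET}"    # network namespace holding 169.254.169.254
    veth_outer  = f"tap{NET[:10]}0"   # tap578f6a80-10, plugged into br-int
    veth_inner  = f"tap{NET[:10]}1"   # tap578f6a80-11, inside the namespace
    haproxy_cfg = f"/var/lib/neutron/ovn-metadata-proxy/{NET}.conf"
    pidfile     = f"/var/lib/neutron/external/pids/{NET}.pid.haproxy"

    # The agent then launches, via rootwrap (matching the command logged above):
    #   ip netns exec {namespace} env PROCESS_TAG=haproxy-{NET} haproxy -f {haproxy_cfg}
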
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.886 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123017.886179, 270d952c-e221-49ae-ba25-b259f07a2be3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.887 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] VM Started (Lifecycle Event)
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.913 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.921 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123017.886289, 270d952c-e221-49ae-ba25-b259f07a2be3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.921 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] VM Paused (Lifecycle Event)
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.944 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.949 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:10:17 compute-0 nova_compute[350387]: 2025-11-26 02:10:17.966 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:10:18 compute-0 podman[442075]: 2025-11-26 02:10:18.130959187 +0000 UTC m=+0.090347494 container create 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:10:18 compute-0 systemd[1]: Started libpod-conmon-0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879.scope.
Nov 26 02:10:18 compute-0 podman[442075]: 2025-11-26 02:10:18.094472984 +0000 UTC m=+0.053861301 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc357ff67695aa387cd425202bf12597296af98bcbeb57a295ab270f6316a3e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.273 350391 DEBUG nova.compute.manager [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.274 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.274 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.275 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.275 350391 DEBUG nova.compute.manager [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] No waiting events found dispatching network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.275 350391 WARNING nova.compute.manager [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received unexpected event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 for instance with vm_state active and task_state None.
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.276 350391 DEBUG nova.compute.manager [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received event network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.276 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.277 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.277 350391 DEBUG oslo_concurrency.lockutils [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.278 350391 DEBUG nova.compute.manager [req-8f7862f1-533b-4d22-bfa2-5ba7eeedad46 req-1a21083c-2817-48a2-b534-2557613dc641 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Processing event network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.279 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.292 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123018.285803, 270d952c-e221-49ae-ba25-b259f07a2be3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.292 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] VM Resumed (Lifecycle Event)
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.295 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.304 350391 INFO nova.virt.libvirt.driver [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Instance spawned successfully.
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.304 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 02:10:18 compute-0 podman[442075]: 2025-11-26 02:10:18.313620347 +0000 UTC m=+0.273008674 container init 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.327 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:10:18 compute-0 podman[442075]: 2025-11-26 02:10:18.33189245 +0000 UTC m=+0.291280777 container start 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.334 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.334 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.334 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.335 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.335 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.336 350391 DEBUG nova.virt.libvirt.driver [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.342 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.353 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.368 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:10:18 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [NOTICE]   (442094) : New worker (442096) forked
Nov 26 02:10:18 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [NOTICE]   (442094) : Loading success.
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.394 350391 INFO nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Took 11.15 seconds to spawn the instance on the hypervisor.
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.394 350391 DEBUG nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.478 350391 INFO nova.compute.manager [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Took 12.26 seconds to build instance.
Nov 26 02:10:18 compute-0 nova_compute[350387]: 2025-11-26 02:10:18.499 350391 DEBUG oslo_concurrency.lockutils [None req-1509060c-a783-4593-aaff-3026bae8d7f2 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 65 op/s
Nov 26 02:10:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.679 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.851 350391 DEBUG nova.compute.manager [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received event network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.854 350391 DEBUG oslo_concurrency.lockutils [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.855 350391 DEBUG oslo_concurrency.lockutils [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.858 350391 DEBUG oslo_concurrency.lockutils [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.858 350391 DEBUG nova.compute.manager [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] No waiting events found dispatching network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:10:20 compute-0 nova_compute[350387]: 2025-11-26 02:10:20.859 350391 WARNING nova.compute.manager [req-83cbaf70-9ea2-444b-8b34-0a967adb1840 req-7d41c626-1535-4531-9983-0f3055367c51 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received unexpected event network-vif-plugged-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 for instance with vm_state active and task_state None.#033[00m
Nov 26 02:10:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Nov 26 02:10:21 compute-0 NetworkManager[48886]: <info>  [1764123021.9016] manager: (patch-br-int-to-provnet-c19f7092-632f-4b5a-a43a-928c0892538c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 26 02:10:21 compute-0 NetworkManager[48886]: <info>  [1764123021.9026] manager: (patch-provnet-c19f7092-632f-4b5a-a43a-928c0892538c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 26 02:10:21 compute-0 nova_compute[350387]: 2025-11-26 02:10:21.900 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.022 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:22 compute-0 ovn_controller[89102]: 2025-11-26T02:10:22Z|00076|binding|INFO|Releasing lport 3ad46a56-806e-4d01-abd0-9720b5a6bbac from this chassis (sb_readonly=0)
Nov 26 02:10:22 compute-0 ovn_controller[89102]: 2025-11-26T02:10:22Z|00077|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.045 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.635 350391 DEBUG nova.compute.manager [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.636 350391 DEBUG nova.compute.manager [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing instance network info cache due to event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.637 350391 DEBUG oslo_concurrency.lockutils [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.637 350391 DEBUG oslo_concurrency.lockutils [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:10:22 compute-0 nova_compute[350387]: 2025-11-26 02:10:22.637 350391 DEBUG nova.network.neutron [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:10:23 compute-0 nova_compute[350387]: 2025-11-26 02:10:23.357 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 159 op/s
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.939 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.940 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.940 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.941 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.942 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.944 350391 INFO nova.compute.manager [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Terminating instance#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.947 350391 DEBUG nova.compute.manager [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.954 350391 DEBUG nova.compute.manager [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received event network-changed-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.957 350391 DEBUG nova.compute.manager [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Refreshing instance network info cache due to event network-changed-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.960 350391 DEBUG oslo_concurrency.lockutils [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.960 350391 DEBUG oslo_concurrency.lockutils [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:10:24 compute-0 nova_compute[350387]: 2025-11-26 02:10:24.961 350391 DEBUG nova.network.neutron [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Refreshing network info cache for port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:10:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:24.994 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:24.995 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:24.996 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:10:25 compute-0 kernel: tap1bb955bd-fd (unregistering): left promiscuous mode
Nov 26 02:10:25 compute-0 NetworkManager[48886]: <info>  [1764123025.0428] device (tap1bb955bd-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.066 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 ovn_controller[89102]: 2025-11-26T02:10:25Z|00078|binding|INFO|Releasing lport 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 from this chassis (sb_readonly=0)
Nov 26 02:10:25 compute-0 ovn_controller[89102]: 2025-11-26T02:10:25Z|00079|binding|INFO|Setting lport 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 down in Southbound
Nov 26 02:10:25 compute-0 ovn_controller[89102]: 2025-11-26T02:10:25Z|00080|binding|INFO|Removing iface tap1bb955bd-fd ovn-installed in OVS
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.071 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.075 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:7c:0a 10.100.0.4'], port_security=['fa:16:3e:41:7c:0a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '270d952c-e221-49ae-ba25-b259f07a2be3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-578f6a80-1a41-45c9-950f-a1b20db33909', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ec6a84328cd54e0fad4f07089c4e4e95', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92c65b7b-9431-492d-8fb4-9a245526b800', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8edeb8c-abc9-4149-8939-cbe5e03d5fa6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.076 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 in datapath 578f6a80-1a41-45c9-950f-a1b20db33909 unbound from our chassis#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.077 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 578f6a80-1a41-45c9-950f-a1b20db33909, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.078 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[65bc90ca-3c50-4b45-936b-11bbbca168fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.079 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909 namespace which is not needed anymore#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.090 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 26 02:10:25 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 8.009s CPU time.
Nov 26 02:10:25 compute-0 systemd-machined[138512]: Machine qemu-7-instance-00000007 terminated.
Nov 26 02:10:25 compute-0 podman[442111]: 2025-11-26 02:10:25.178660633 +0000 UTC m=+0.098328098 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:10:25 compute-0 podman[442110]: 2025-11-26 02:10:25.184613499 +0000 UTC m=+0.096609429 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:10:25 compute-0 podman[442107]: 2025-11-26 02:10:25.213312404 +0000 UTC m=+0.116366703 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.213 350391 INFO nova.virt.libvirt.driver [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Instance destroyed successfully.#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.213 350391 DEBUG nova.objects.instance [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lazy-loading 'resources' on Instance uuid 270d952c-e221-49ae-ba25-b259f07a2be3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.231 350391 DEBUG nova.virt.libvirt.vif [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:10:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1930893835',display_name='tempest-ServersTestManualDisk-server-1930893835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1930893835',id=7,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIw6oaT02VQzNhSb55J5HKxH2V3Dbs5h1DE4yOuNN2iNQuMYPDSHQiBizY1qXSUSi68iXRvtrlwURaP2sypM0fG+fUOLtbd/ORld54R8DrYOis2sZUEXHqML8q2KtGbp/Q==',key_name='tempest-keypair-1650669487',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ec6a84328cd54e0fad4f07089c4e4e95',ramdisk_id='',reservation_id='r-r1fs902s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-501918347',owner_user_name='tempest-ServersTestManualDisk-501918347-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:10:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1c3747a1e5af44b0bcc1d0a5f8241343',uuid=270d952c-e221-49ae-ba25-b259f07a2be3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.231 350391 DEBUG nova.network.os_vif_util [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converting VIF {"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.232 350391 DEBUG nova.network.os_vif_util [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.232 350391 DEBUG os_vif [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.233 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.233 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bb955bd-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.236 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.238 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.240 350391 INFO os_vif [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:7c:0a,bridge_name='br-int',has_traffic_filtering=True,id=1bb955bd-fd16-48e2-8413-ad1ade1cf2e1,network=Network(578f6a80-1a41-45c9-950f-a1b20db33909),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bb955bd-fd')#033[00m
Nov 26 02:10:25 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [NOTICE]   (442094) : haproxy version is 2.8.14-c23fe91
Nov 26 02:10:25 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [NOTICE]   (442094) : path to executable is /usr/sbin/haproxy
Nov 26 02:10:25 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [WARNING]  (442094) : Exiting Master process...
Nov 26 02:10:25 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [ALERT]    (442094) : Current worker (442096) exited with code 143 (Terminated)
Nov 26 02:10:25 compute-0 neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909[442089]: [WARNING]  (442094) : All workers exited. Exiting... (0)
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.286 350391 DEBUG nova.network.neutron [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated VIF entry in instance network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.286 350391 DEBUG nova.network.neutron [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:10:25 compute-0 systemd[1]: libpod-0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879.scope: Deactivated successfully.
Nov 26 02:10:25 compute-0 podman[442196]: 2025-11-26 02:10:25.295624772 +0000 UTC m=+0.073575344 container died 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.314 350391 DEBUG oslo_concurrency.lockutils [req-81d51224-09e6-4fb2-a0f1-937efd507b60 req-bd07a327-d67d-4c83-964b-5f67dad8c5fa 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc357ff67695aa387cd425202bf12597296af98bcbeb57a295ab270f6316a3e3-merged.mount: Deactivated successfully.
Nov 26 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879-userdata-shm.mount: Deactivated successfully.
Nov 26 02:10:25 compute-0 podman[442196]: 2025-11-26 02:10:25.365313855 +0000 UTC m=+0.143264417 container cleanup 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 02:10:25 compute-0 systemd[1]: libpod-conmon-0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879.scope: Deactivated successfully.
Nov 26 02:10:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 150 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 824 KiB/s wr, 169 op/s
Nov 26 02:10:25 compute-0 podman[442240]: 2025-11-26 02:10:25.501258446 +0000 UTC m=+0.096763893 container remove 0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.520 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0208e0-7c74-4166-a07e-65fd0682f3d4]: (4, ('Wed Nov 26 02:10:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909 (0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879)\n0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879\nWed Nov 26 02:10:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909 (0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879)\n0c5628979c8acc3b1829099a7440e968a90e21430e82d0dbe1ff19c956ac4879\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.525 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fc294f92-26a2-4199-bae0-5cdda2778aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.526 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap578f6a80-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.533 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 kernel: tap578f6a80-10: left promiscuous mode
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.555 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.558 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.557 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[192b88a1-1b08-4b53-986c-5a7cf0c59709]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.585 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[725102ce-c2d0-4734-9cc9-f7587f7e3a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.587 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[570e312f-bb8b-4ce7-b39d-b88ce4e7c65c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.607 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d77146-69cf-4038-bccc-b6b8074252ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664789, 'reachable_time': 33702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442253, 'error': None, 'target': 'ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d578f6a80\x2d1a41\x2d45c9\x2d950f\x2da1b20db33909.mount: Deactivated successfully.
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.614 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-578f6a80-1a41-45c9-950f-a1b20db33909 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:10:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:25.615 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[84ed7445-8c89-4d89-8d26-63657dd44984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.928 350391 INFO nova.virt.libvirt.driver [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Deleting instance files /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3_del#033[00m
Nov 26 02:10:25 compute-0 nova_compute[350387]: 2025-11-26 02:10:25.929 350391 INFO nova.virt.libvirt.driver [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Deletion of /var/lib/nova/instances/270d952c-e221-49ae-ba25-b259f07a2be3_del complete#033[00m
Nov 26 02:10:26 compute-0 nova_compute[350387]: 2025-11-26 02:10:26.013 350391 INFO nova.compute.manager [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:10:26 compute-0 nova_compute[350387]: 2025-11-26 02:10:26.013 350391 DEBUG oslo.service.loopingcall [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:10:26 compute-0 nova_compute[350387]: 2025-11-26 02:10:26.014 350391 DEBUG nova.compute.manager [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:10:26 compute-0 nova_compute[350387]: 2025-11-26 02:10:26.014 350391 DEBUG nova.network.neutron [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:10:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:10:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197442208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:10:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:10:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197442208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.411 350391 DEBUG nova.network.neutron [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.443 350391 INFO nova.compute.manager [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Took 1.43 seconds to deallocate network for instance.#033[00m
Nov 26 02:10:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 163 op/s
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.488 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.489 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.626 350391 DEBUG oslo_concurrency.processutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.659 350391 DEBUG nova.compute.manager [req-5df635c1-621b-4662-a2be-71d6f421b1a9 req-043d3f2b-750e-43be-97b3-48902b275e03 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Received event network-vif-deleted-1bb955bd-fd16-48e2-8413-ad1ade1cf2e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.876 350391 DEBUG nova.network.neutron [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updated VIF entry in instance network info cache for port 1bb955bd-fd16-48e2-8413-ad1ade1cf2e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.877 350391 DEBUG nova.network.neutron [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Updating instance_info_cache with network_info: [{"id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "address": "fa:16:3e:41:7c:0a", "network": {"id": "578f6a80-1a41-45c9-950f-a1b20db33909", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1413004914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ec6a84328cd54e0fad4f07089c4e4e95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bb955bd-fd", "ovs_interfaceid": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:10:27 compute-0 nova_compute[350387]: 2025-11-26 02:10:27.904 350391 DEBUG oslo_concurrency.lockutils [req-4496e355-16c9-4f0d-bcc4-769ac27435da req-1cf19ee8-2423-4949-bb9e-d3363f849590 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-270d952c-e221-49ae-ba25-b259f07a2be3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
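For reference, the network_info payload cached above nests floating IPs under each fixed IP of each subnet. A small sketch of walking that structure, using a copy of the logged VIF entry trimmed to the fields read below:

    # Trimmed copy of the instance_info_cache entry from the log line above.
    vif = {
        "id": "1bb955bd-fd16-48e2-8413-ad1ade1cf2e1",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.4", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.224", "type": "floating"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])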
Nov 26 02:10:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:10:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713853707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.144 350391 DEBUG oslo_concurrency.processutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
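This is the storage poll that produced the ceph-mon mon_command lines above: nova shells out to the ceph CLI and parses JSON. A sketch of the same call with the stdlib, reusing the client id and conf path from the log; the "stats" keys are those emitted by ceph df -f json:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)
    # Cluster-wide totals; nova derives its DISK_GB inventory from these.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])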
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.153 350391 DEBUG nova.compute.provider_tree [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.165 350391 DEBUG nova.scheduler.client.report [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
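Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio. Re-computing the logged numbers:

    # Inventory copied from the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2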
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.186 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.214 350391 INFO nova.scheduler.client.report [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Deleted allocations for instance 270d952c-e221-49ae-ba25-b259f07a2be3
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.289 350391 DEBUG oslo_concurrency.lockutils [None req-23a98774-5b09-4b63-b0ab-f9cbb20340a9 1c3747a1e5af44b0bcc1d0a5f8241343 ec6a84328cd54e0fad4f07089c4e4e95 - - default default] Lock "270d952c-e221-49ae-ba25-b259f07a2be3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:28 compute-0 nova_compute[350387]: 2025-11-26 02:10:28.360 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 123 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 157 op/s
Nov 26 02:10:29 compute-0 podman[158021]: time="2025-11-26T02:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:10:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:10:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Nov 26 02:10:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:30 compute-0 nova_compute[350387]: 2025-11-26 02:10:30.236 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:31 compute-0 openstack_network_exporter[367323]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:10:31 compute-0 openstack_network_exporter[367323]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:10:31 compute-0 openstack_network_exporter[367323]: ERROR   02:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:10:31 compute-0 openstack_network_exporter[367323]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:10:31 compute-0 openstack_network_exporter[367323]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:10:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 169 op/s
Nov 26 02:10:31 compute-0 podman[442276]: 2025-11-26 02:10:31.554799632 +0000 UTC m=+0.095637912 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:10:31 compute-0 podman[442277]: 2025-11-26 02:10:31.645096703 +0000 UTC m=+0.183942668 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
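Each health_status line above is podman executing the container's configured healthcheck ('/openstack/healthcheck'). The same check can be driven by hand; a sketch, with the container name taken from the log:

    import subprocess

    # Exit status 0 means the check passed (logged as health_status=healthy).
    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if rc == 0 else "unhealthy")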
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.331 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.333 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.333 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.334 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.335 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.364 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 KiB/s wr, 104 op/s
Nov 26 02:10:33 compute-0 podman[442322]: 2025-11-26 02:10:33.582486426 +0000 UTC m=+0.127908637 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, io.buildah.version=1.29.0)
Nov 26 02:10:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:10:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668520584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.818 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.931 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:10:33 compute-0 nova_compute[350387]: 2025-11-26 02:10:33.933 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.609 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.611 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3929MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
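The hypervisor view above embeds the host's PCI device list as JSON; vendor 1af4 is Red Hat/virtio and 8086 is Intel (the full list has six 1af4 and five 8086 functions). A quick tally over a copy of the logged list trimmed to the field it counts:

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:05.0", "vendor_id": "1af4"},
        {"address": "0000:00:00.0", "vendor_id": "8086"},
        {"address": "0000:00:01.0", "vendor_id": "8086"},
        {"address": "0000:00:06.0", "vendor_id": "1af4"},
        # ...remaining seven entries as in the log line above...
    ]
    print(Counter(dev["vendor_id"] for dev in pci_devices))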
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.612 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.612 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.702 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 5c8719f7-1028-4983-aa89-c99a459b6295 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.703 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.704 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.744 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:10:34 compute-0 ovn_controller[89102]: 2025-11-26T02:10:34Z|00081|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:10:34 compute-0 nova_compute[350387]: 2025-11-26 02:10:34.829 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:35 compute-0 podman[442382]: 2025-11-26 02:10:35.085362178 +0000 UTC m=+0.127555407 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 26 02:10:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:10:35 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503578046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.239 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.242 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.251 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.270 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.296 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:10:35 compute-0 nova_compute[350387]: 2025-11-26 02:10:35.297 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:10:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 665 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Nov 26 02:10:36 compute-0 nova_compute[350387]: 2025-11-26 02:10:36.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:36 compute-0 nova_compute[350387]: 2025-11-26 02:10:36.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:36 compute-0 nova_compute[350387]: 2025-11-26 02:10:36.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 26 02:10:38 compute-0 nova_compute[350387]: 2025-11-26 02:10:38.366 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 26 02:10:39 compute-0 podman[442406]: 2025-11-26 02:10:39.576776931 +0000 UTC m=+0.124067099 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 02:10:39 compute-0 podman[442407]: 2025-11-26 02:10:39.583419107 +0000 UTC m=+0.125369666 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:10:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.209 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123025.208196, 270d952c-e221-49ae-ba25-b259f07a2be3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.210 350391 INFO nova.compute.manager [-] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] VM Stopped (Lifecycle Event)
Nov 26 02:10:40 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:40.232 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:10:40 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:40.233 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.239 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.242 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.250 350391 DEBUG nova.compute.manager [None req-533fcc03-75cd-4cbf-bd0a-a63ce75ab2b4 - - - - - -] [instance: 270d952c-e221-49ae-ba25-b259f07a2be3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.613 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.614 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.615 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:10:40 compute-0 nova_compute[350387]: 2025-11-26 02:10:40.616 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:10:41
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'default.rgw.log']
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
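The balancer pass above ran in upmap mode with max misplaced 0.05 and prepared no changes across the ten pools. Its state can be read back the same way nova polls ceph in this log; a sketch, assuming client.openstack's mon caps permit the command:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "balancer", "status", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    print(json.loads(out)["mode"])  # expected: "upmap"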
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:10:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.873 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.874 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
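The execution model ceilometer describes here (every pollster registered onto a small thread pool) reduces to the stdlib pattern below; the pollster names and the poll body are placeholders, and max_workers=1 matches the "[1] threads" logged above:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for a pollster's sample-collection call.
        return f"{name}: polled"

    pollsters = ["cpu", "memory.usage", "disk.ephemeral.size"]
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)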
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.882 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5c8719f7-1028-4983-aa89-c99a459b6295 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:10:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:42.884 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5c8719f7-1028-4983-aa89-c99a459b6295 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:10:43 compute-0 nova_compute[350387]: 2025-11-26 02:10:43.045 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:10:43 compute-0 nova_compute[350387]: 2025-11-26 02:10:43.063 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:10:43 compute-0 nova_compute[350387]: 2025-11-26 02:10:43.064 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:10:43 compute-0 nova_compute[350387]: 2025-11-26 02:10:43.064 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:10:43 compute-0 nova_compute[350387]: 2025-11-26 02:10:43.369 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
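
The network_info blob nova-compute caches above is plain JSON: one OVS VIF whose fixed address 10.100.0.9 carries the floating IP 192.168.122.183 nested beneath it. A short sketch that walks that structure, assuming the JSON array from the log line has been saved verbatim to network_info.json:

    import json

    # assumption: the network_info list from the log line above, saved as-is
    with open("network_info.json") as fh:
        vifs = json.load(fh)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"], "->", floats or "(no floating IP)")
    # prints: tap4b2c5180-2f 10.100.0.9 -> ['192.168.122.183']
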
Nov 26 02:10:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.730 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1994 Content-Type: application/json Date: Wed, 26 Nov 2025 02:10:43 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c110c95e-7986-4103-b724-4960d120da74 x-openstack-request-id: req-c110c95e-7986-4103-b724-4960d120da74 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.730 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5c8719f7-1028-4983-aa89-c99a459b6295", "name": "tempest-AttachInterfacesUnderV243Test-server-173154417", "status": "ACTIVE", "tenant_id": "339deb116b764070abc6d50520ee33c8", "user_id": "aadae2b9a9834185b051c2bc59c6054a", "metadata": {}, "hostId": "10c8a3da1341b5b691253a4d2b6f0cd43732119218070d4ef0250588", "image": {"id": "4728a8a0-1107-4816-98c6-74482d53f92c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/4728a8a0-1107-4816-98c6-74482d53f92c"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:10:03Z", "updated": "2025-11-26T02:10:17Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1836704104-network": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5a:7b:7e"}, {"version": 4, "addr": "192.168.122.183", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5a:7b:7e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5c8719f7-1028-4983-aa89-c99a459b6295"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5c8719f7-1028-4983-aa89-c99a459b6295"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-317693094", "OS-SRV-USG:launched_at": "2025-11-26T02:10:16.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--658208883"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.731 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5c8719f7-1028-4983-aa89-c99a459b6295 used request id req-c110c95e-7986-4103-b724-4960d120da74 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.733 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5c8719f7-1028-4983-aa89-c99a459b6295', 'name': 'tempest-AttachInterfacesUnderV243Test-server-173154417', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4728a8a0-1107-4816-98c6-74482d53f92c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '339deb116b764070abc6d50520ee33c8', 'user_id': 'aadae2b9a9834185b051c2bc59c6054a', 'hostId': '10c8a3da1341b5b691253a4d2b6f0cd43732119218070d4ef0250588', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
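
The instance-data dict above is what discovery hands to every libvirt pollster for this cycle; the lines that follow then repeat one fixed sequence per meter: coordination check, heartbeat, sample collection, "Finished polling". A toy model of that loop (simplified bookkeeping, not ceilometer's actual code; filter_via_hashring is a hypothetical name):

    # Toy model of the per-pollster cycle traced by the DEBUG/INFO lines below.
    def run_pollster(manager, pollster, resources):
        if pollster.coordination_group_name is not None:
            # only pollsters bound to a coordination group consult the hashring
            resources = manager.filter_via_hashring(resources)   # hypothetical helper
        manager.heartbeat(pollster.name)        # "Pollster heartbeat update: ..."
        samples = []
        for resource in resources:
            samples.extend(pollster.get_samples(manager, {}, [resource]))
        return samples                          # "Finished polling pollster ..."
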
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.733 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.733 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.734 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.734 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.735 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.735 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.736 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:10:43.734322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.736 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.736 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.737 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:10:43.736910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.744 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5c8719f7-1028-4983-aa89-c99a459b6295 / tap4b2c5180-2f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.744 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.745 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
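
"No delta meter predecessor" above means this is the first vNIC reading for tap4b2c5180-2f, so there is no earlier cumulative value to subtract; the *.delta meters therefore report 0 on this cycle (see network.incoming.bytes.delta further down). A minimal sketch of that bookkeeping, with a hypothetical cache rather than ceilometer's internal structure:

    # Minimal delta-meter sketch: the first observation has no predecessor,
    # so the delta defaults to 0.
    _previous = {}   # (instance_id, device) -> last cumulative value

    def delta(instance_id, device, current):
        key = (instance_id, device)
        prior = _previous.get(key)
        _previous[key] = current
        if prior is None:
            return 0          # "No delta meter predecessor"
        return current - prior
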
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.746 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.746 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.746 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.746 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.746 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.747 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.748 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.748 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.748 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.749 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.749 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:10:43.746768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.749 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:10:43.749223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.750 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.750 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.750 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.750 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.751 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.751 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.751 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:10:43.751265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.752 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.752 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.753 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.753 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.753 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.753 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:10:43.753577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.754 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.754 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.755 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.755 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.755 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.755 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.755 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:10:43.755677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.795 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/cpu volume: 25980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.795 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
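
The cpu sample above, 25980000000, is cumulative guest CPU time in nanoseconds, i.e. roughly 26 s for this 1-vCPU m1.nano instance launched at 02:10:16. A utilization figure only falls out of two consecutive samples; the arithmetic, with an illustrative second sample and interval:

    # Rough CPU-utilization arithmetic from two cumulative cpu samples (ns).
    # The second sample and the interval are illustrative, not from this log.
    cpu_t1, cpu_t2 = 25_980_000_000, 28_980_000_000
    interval_s = 30.0        # assumed polling interval
    vcpus = 1                # m1.nano

    util_pct = (cpu_t2 - cpu_t1) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.1f}%")   # 10.0%
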
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.796 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.796 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.796 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.796 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.796 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.797 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.797 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:10:43.796809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.798 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.798 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.798 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.798 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.798 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.799 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.799 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:10:43.799022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.799 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 5c8719f7-1028-4983-aa89-c99a459b6295: ceilometer.compute.pollsters.NoVolumeException
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.800 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
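
memory.usage is derived from the guest balloon driver's statistics; when no stats collection period is configured on the domain's memballoon device, libvirt typically exposes only the actual/rss counters, the inspector has no usage figure, and the pollster raises NoVolumeException, which is what the WARNING above records. One way to verify from the hypervisor, sketched with the libvirt Python bindings (assumes qemu:///system access on compute-0):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000006")   # OS-EXT-SRV-ATTR:instance_name
    dom.setMemoryStatsPeriod(10, 0)                # ask the balloon to refresh every 10 s
    print(dom.memoryStats())                       # 'available'/'unused' should appear once
                                                   # the guest balloon driver starts reporting
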
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.800 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.800 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.800 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.800 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.801 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.801 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.801 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-173154417>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-173154417>]
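
The ERROR above is expected noise rather than a fault: the libvirt inspector implements the cumulative network meters but not the *.rate variants ("LibvirtInspector does not provide data" just before it), so the pollster raises PollsterPermanentError and the manager blacklists that resource for this meter instead of retrying it every cycle. The pattern, reduced to a toy sketch with simplified bookkeeping:

    # Toy sketch of the permanent-error pattern; the blacklist below is a
    # simplification of what the manager actually tracks.
    class PollsterPermanentError(Exception):
        pass

    _blacklist = set()   # (meter, resource_id) pairs never polled again

    def poll(meter, resource_id, get_samples):
        if (meter, resource_id) in _blacklist:
            return []
        try:
            return list(get_samples(resource_id))
        except PollsterPermanentError:
            _blacklist.add((meter, resource_id))   # "Prevent pollster ... anymore!"
            return []
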
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.802 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.802 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.803 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T02:10:43.801058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.803 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.803 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.804 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.804 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.805 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:10:43.803778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.805 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.805 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.806 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.806 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.806 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.807 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.807 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.807 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.808 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:10:43.806127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.808 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.808 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.808 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.809 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.809 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.809 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.810 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.810 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.810 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.810 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.811 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.811 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.811 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:10:43.808469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:10:43.810432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.812 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.813 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.813 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.813 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.814 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.814 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:10:43.812509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:10:43.814244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.835 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.835 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.836 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
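
The two capacity samples above are per-device: 1073741824 B is exactly the 1 GiB root disk implied by the m1.nano flavor (disk: 1), and the 509952 B device is most plausibly the config drive (the server record shows config_drive "True"); the same two-device split repeats for every disk.device.* meter below. The arithmetic, as a quick check:

    # Per-device capacity check (the config-drive reading is an inference):
    assert 1 * 1024**3 == 1073741824    # 1 GiB root disk, matching flavor disk: 1
    print(509952 / 1024, "KiB")         # 498.0 KiB, the small second device
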
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.837 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.837 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.837 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.837 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:10:43.837675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.902 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.903 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.904 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.904 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.905 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.905 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.905 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.905 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.905 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T02:10:43.905673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.906 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-173154417>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-173154417>]
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.907 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.907 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.907 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.907 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.908 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.908 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.latency volume: 1958488082 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.908 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.latency volume: 2684045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.909 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.909 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.910 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.910 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:10:43.908045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.910 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.911 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.911 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:10:43.911069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.912 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.912 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.912 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.913 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.913 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.913 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.913 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.913 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.914 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.915 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.915 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:10:43.913549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.916 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.916 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.916 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:10:43.916563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.916 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.917 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.917 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.918 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.919 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.919 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.919 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.919 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.920 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:10:43.919926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.920 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.921 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
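The power.state sample above carries volume 1. Ceilometer reports the raw libvirt domain-state enum here, in which 1 is VIR_DOMAIN_RUNNING, so a healthy running guest always polls as 1. A small Python lookup table for the common values; the numbering is libvirt's, the table itself is only illustrative.

    # libvirt virDomainState values; 1 == VIR_DOMAIN_RUNNING, which is why a
    # running guest shows up above as power.state volume: 1.
    LIBVIRT_POWER_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])  # -> running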
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.921 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.921 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.922 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.922 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.922 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.922 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.923 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.923 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.924 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.924 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.924 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.924 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.925 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.925 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.925 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.926 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.926 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.926 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:10:43.922461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:10:43.925087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.927 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.928 15 DEBUG ceilometer.compute.pollsters [-] 5c8719f7-1028-4983-aa89-c99a459b6295/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.928 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.929 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.930 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.930 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:10:43.927453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.930 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.930 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.931 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.931 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.931 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.931 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.932 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.932 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.932 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.932 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.932 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.933 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:10:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:10:43.934 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
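Every meter above runs through the same lifecycle: a discovery pass over local instances, a coordination check against the (unconfigured) hashrings, a heartbeat update, one sample per device, and a closing "Finished polling" marker. A minimal Python sketch of that loop under those assumptions; SimplePollingManager and its methods are hypothetical stand-ins, not ceilometer's real classes.

    import datetime
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("polling.sketch")

    class SimplePollingManager:
        def __init__(self, pollsters, hashrings=None):
            self.pollsters = pollsters   # meter name -> callable(resources)
            self.hashrings = hashrings   # None => no coordination configured
            self.heartbeats = {}         # meter name -> last poll timestamp

        def run_cycle(self, discover):
            resources = discover()       # "Executing discovery process ..."
            for name, poll in self.pollsters.items():
                log.info("Polling pollster %s", name)
                if self.hashrings is not None and name not in self.hashrings:
                    continue             # the coordination check in the log
                self.heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
                for sample in poll(resources):
                    log.debug("%s/%s volume: %s",
                              sample["resource_id"], name, sample["volume"])
                log.info("Finished polling pollster %s", name)

    mgr = SimplePollingManager({"power.state": lambda res: [
        {"resource_id": r, "volume": 1} for r in res]})
    mgr.run_cycle(lambda: ["5c8719f7-1028-4983-aa89-c99a459b6295"])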
Nov 26 02:10:44 compute-0 nova_compute[350387]: 2025-11-26 02:10:44.059 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:10:44 compute-0 nova_compute[350387]: 2025-11-26 02:10:44.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:10:44 compute-0 nova_compute[350387]: 2025-11-26 02:10:44.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:10:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:45 compute-0 nova_compute[350387]: 2025-11-26 02:10:45.245 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:10:46.236 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:10:46 compute-0 nova_compute[350387]: 2025-11-26 02:10:46.345 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:46 compute-0 nova_compute[350387]: 2025-11-26 02:10:46.850 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:48 compute-0 nova_compute[350387]: 2025-11-26 02:10:48.372 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:10:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:50 compute-0 nova_compute[350387]: 2025-11-26 02:10:50.247 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
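Every autoscaler row above fits one relation: raw PG target = capacity ratio × bias × cluster PG budget, then quantization to a power of two, with per-pool minimums keeping most pools pinned at 32. The budget works out to 300 here, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100. A Python re-derivation of two rows; the budget value and the quantization floor are inferred from the logged numbers, not quoted from Ceph's source.

    def pg_target(usage_ratio, bias, pg_budget=300):
        # raw target before quantization; 300 = 3 OSDs x 100 target PGs/OSD
        return usage_ratio * bias * pg_budget

    def quantize(target, floor=1):
        # round up to the next power of two, never below the floor
        n = floor
        while n < target:
            n *= 2
        return n

    # Pool '.mgr': bias 1.0 -> pg target 0.0021557249951162337, quantized to 1
    print(pg_target(7.185749983720779e-06, 1.0))
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))

    # Pool 'cephfs.cephfs.meta': bias 4.0 -> pg target 0.0006104707950771635
    print(pg_target(5.087256625643029e-07, 4.0))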
Nov 26 02:10:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Nov 26 02:10:52 compute-0 ovn_controller[89102]: 2025-11-26T02:10:52Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:7b:7e 10.100.0.9
Nov 26 02:10:52 compute-0 ovn_controller[89102]: 2025-11-26T02:10:52Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:7b:7e 10.100.0.9
Nov 26 02:10:53 compute-0 nova_compute[350387]: 2025-11-26 02:10:53.375 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 118 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 12 op/s
Nov 26 02:10:54 compute-0 nova_compute[350387]: 2025-11-26 02:10:54.583 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:10:55 compute-0 nova_compute[350387]: 2025-11-26 02:10:55.250 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:55 compute-0 nova_compute[350387]: 2025-11-26 02:10:55.258 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 125 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 26 02:10:55 compute-0 podman[442454]: 2025-11-26 02:10:55.582660887 +0000 UTC m=+0.109068127 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:10:55 compute-0 podman[442453]: 2025-11-26 02:10:55.606446713 +0000 UTC m=+0.140241330 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 02:10:55 compute-0 podman[442452]: 2025-11-26 02:10:55.612674368 +0000 UTC m=+0.151875636 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 26 02:10:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 26 02:10:58 compute-0 nova_compute[350387]: 2025-11-26 02:10:58.379 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.312 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.313 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.338 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.420 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.422 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.432 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.433 350391 INFO nova.compute.claims [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:10:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 26 02:10:59 compute-0 nova_compute[350387]: 2025-11-26 02:10:59.599 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:10:59 compute-0 podman[158021]: time="2025-11-26T02:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:10:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:10:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
Nov 26 02:11:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189074166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.128 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.144 350391 DEBUG nova.compute.provider_tree [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.169 350391 DEBUG nova.scheduler.client.report [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.209 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
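The inventory nova just reported determines what the 02:10:59 claim could fit; placement's rule is capacity = (total - reserved) × allocation_ratio per resource class. Worked out for the logged values (the formula is the standard placement convention, restated here rather than quoted from nova's code):

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2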
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.211 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.254 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.324 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.325 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.361 350391 INFO nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.391 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.606 350391 DEBUG nova.policy [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd0f6705a78b34ed4991b2f5db8d428c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f3bad7f1e634c97a6a227a970edc48a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
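The "Policy check ... failed" line is not an error: nova asks oslo.policy whether this member/reader user may attach the instance to an external network, and the denial simply steers port creation to tenant networks. A toy re-enactment; the admin-only default rule used below is an assumption for illustration, not nova's actual default string.

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # assume an admin-only rule for illustration
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"], "is_admin": False,
             "project_id": "6f3bad7f1e634c97a6a227a970edc48a"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False: the check "fails" exactly as logged above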
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.638 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.641 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.643 350391 INFO nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Creating image(s)#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.695 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.741 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.792 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.802 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.877 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.878 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.880 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.880 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.933 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:00 compute-0 nova_compute[350387]: 2025-11-26 02:11:00.944 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.353 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:01 compute-0 openstack_network_exporter[367323]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:11:01 compute-0 openstack_network_exporter[367323]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:11:01 compute-0 openstack_network_exporter[367323]: ERROR   02:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:11:01 compute-0 openstack_network_exporter[367323]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:11:01 compute-0 openstack_network_exporter[367323]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:11:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.507 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] resizing rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
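The sequence from 02:11:00.802 to the resize just above is nova's Ceph-backed image path: probe the cached base image under prlimit resource caps, rbd import it into the vms pool, then grow it to the flavor's 1 GiB root disk. The same steps replayed with plain subprocess calls; the first two commands are copied from the log, while nova performs the final resize through the librbd Python binding, so the rbd resize call is an illustrative CLI equivalent.

    import json
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    base = "/var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17"
    disk = "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk"

    # 1. inspect the cached base image under address-space and CPU caps
    info = json.loads(run([
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json"]))

    # 2. import it as an RBD image in the 'vms' pool
    run(["rbd", "import", "--pool", "vms", base, disk,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])

    # 3. grow it to the 1 GiB root disk (nova does this via librbd)
    run(["rbd", "resize", "--pool", "vms", disk, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])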
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.614 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Successfully created port: a7933322-1af0-456e-9e1c-2102f607d4f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.753 350391 DEBUG nova.objects.instance [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lazy-loading 'migration_context' on Instance uuid 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.771 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.771 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Ensure instance console log exists: /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.772 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.773 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:01 compute-0 nova_compute[350387]: 2025-11-26 02:11:01.773 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
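
The acquire/release pair on "vgpu_resources" is the stock oslo.concurrency pattern; a minimal sketch of the decorator form that produces exactly this trio of DEBUG lines (toy body, not the real _allocate_mdevs):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs():
        # Runs with the named in-process lock held; the wrapper logs
        # "Acquiring", "acquired ... waited", and "released ... held".
        return None
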
Nov 26 02:11:02 compute-0 podman[442699]: 2025-11-26 02:11:02.589356363 +0000 UTC m=+0.131576537 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:11:02 compute-0 podman[442700]: 2025-11-26 02:11:02.648663235 +0000 UTC m=+0.189543022 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.701 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Successfully updated port: a7933322-1af0-456e-9e1c-2102f607d4f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.717 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.717 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquired lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.717 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.805 350391 DEBUG nova.compute.manager [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-changed-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.806 350391 DEBUG nova.compute.manager [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Refreshing instance network info cache due to event network-changed-a7933322-1af0-456e-9e1c-2102f607d4f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.807 350391 DEBUG oslo_concurrency.lockutils [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:02 compute-0 nova_compute[350387]: 2025-11-26 02:11:02.873 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 02:11:03 compute-0 nova_compute[350387]: 2025-11-26 02:11:03.382 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 149 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Nov 26 02:11:04 compute-0 podman[442740]: 2025-11-26 02:11:04.581782389 +0000 UTC m=+0.123273065 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-type=git, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.713 350391 DEBUG nova.network.neutron [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Updating instance_info_cache with network_info: [{"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
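
The network_info payload logged above is ordinary JSON once the log framing is stripped. A sketch of walking it for the fixed IPs, using a heavily trimmed copy of the cache entry (standalone snippet; the real entry carries many more keys):

    import json

    network_info = json.loads('''[{
      "id": "a7933322-1af0-456e-9e1c-2102f607d4f1",
      "address": "fa:16:3e:5a:72:ad",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.10", "type": "fixed"}]}]}
    }]''')

    for port in network_info:
        for subnet in port['network']['subnets']:
            for ip in subnet['ips']:
                print(port['id'], ip['address'])  # -> a7933322-... 10.100.0.10
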
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.735 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Releasing lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.736 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Instance network_info: |[{"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.736 350391 DEBUG oslo_concurrency.lockutils [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.737 350391 DEBUG nova.network.neutron [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Refreshing network info cache for port a7933322-1af0-456e-9e1c-2102f607d4f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.741 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Start _get_guest_xml network_info=[{"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.752 350391 WARNING nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.763 350391 DEBUG nova.virt.libvirt.host [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.764 350391 DEBUG nova.virt.libvirt.host [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.770 350391 DEBUG nova.virt.libvirt.host [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.770 350391 DEBUG nova.virt.libvirt.host [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
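
The two controller probes above reduce to simple filesystem checks; on this cgroups-v2 host the cpu controller shows up in the unified hierarchy's controller list. A simplified stand-in for the nova.virt.libvirt.host checks, not the exact code:

    def has_cgroupsv2_cpu_controller(
            controllers_file='/sys/fs/cgroup/cgroup.controllers'):
        # On a unified-hierarchy (v2) host this file lists the enabled
        # controllers, e.g. "cpuset cpu io memory pids".
        try:
            with open(controllers_file) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted -> not cgroups v2

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
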
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.771 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.771 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.772 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.772 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.772 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.773 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.773 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.774 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.775 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.775 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.776 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.776 350391 DEBUG nova.virt.hardware [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
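
With no flavor or image topology constraints, every preference above is 0 (meaning "no preference") and the limits fall back to 65536, so the only topology whose product is 1 vCPU is 1 socket x 1 core x 1 thread. A toy version of that enumeration, not Nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples that multiply out to
        # exactly the requested vCPU count, within the limits.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
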
Nov 26 02:11:04 compute-0 nova_compute[350387]: 2025-11-26 02:11:04.783 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:11:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465605532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.257 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
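
The mon dump fetched here is plain JSON; a sketch of issuing the same command directly and reading out the monitor addresses that later surface as <host> elements in the guest XML (subprocess used for brevity instead of processutils):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        # e.g. "compute-0 192.168.122.100:6789/0" on this deployment
        print(mon.get('name'), mon.get('public_addr'))
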
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.309 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.321 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.353 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.367 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.368 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.400 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:11:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 172 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 26 02:11:05 compute-0 podman[442800]: 2025-11-26 02:11:05.582497066 +0000 UTC m=+0.136383202 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.740 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.741 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.750 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.751 350391 INFO nova.compute.claims [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:11:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:11:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161417358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.863 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.865 350391 DEBUG nova.virt.libvirt.vif [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-508132063',display_name='tempest-ServerAddressesTestJSON-server-508132063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-508132063',id=8,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f3bad7f1e634c97a6a227a970edc48a',ramdisk_id='',reservation_id='r-2qz3bm4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-28329255',owner_user_name='tempest-ServerAddressesTestJSON-28329255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:11:00Z,user_data=None,user_id='d0f6705a78b34ed4991b2f5db8d428c4',uuid=5f2f6ac2-07e8-46b8-8930-5f9a67979d3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.865 350391 DEBUG nova.network.os_vif_util [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converting VIF {"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.866 350391 DEBUG nova.network.os_vif_util [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.867 350391 DEBUG nova.objects.instance [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.927 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <uuid>5f2f6ac2-07e8-46b8-8930-5f9a67979d3f</uuid>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <name>instance-00000008</name>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:name>tempest-ServerAddressesTestJSON-server-508132063</nova:name>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:11:04</nova:creationTime>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:user uuid="d0f6705a78b34ed4991b2f5db8d428c4">tempest-ServerAddressesTestJSON-28329255-project-member</nova:user>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:project uuid="6f3bad7f1e634c97a6a227a970edc48a">tempest-ServerAddressesTestJSON-28329255</nova:project>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <nova:port uuid="a7933322-1af0-456e-9e1c-2102f607d4f1">
Nov 26 02:11:05 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <system>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="serial">5f2f6ac2-07e8-46b8-8930-5f9a67979d3f</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="uuid">5f2f6ac2-07e8-46b8-8930-5f9a67979d3f</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </system>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <os>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </os>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <features>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </features>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk">
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </source>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config">
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </source>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:11:05 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:5a:72:ad"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <target dev="tapa7933322-1a"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/console.log" append="off"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <video>
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </video>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:11:05 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:11:05 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:11:05 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:11:05 compute-0 nova_compute[350387]: </domain>
Nov 26 02:11:05 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
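
After rendering, the driver hands this XML to libvirtd to define and boot the guest. A minimal sketch of that hand-off with the libvirt Python bindings; the XML here is abbreviated with the device sections elided, and Nova's real flow additionally coordinates VIF plugging around the power-on:

    import libvirt

    xml = """<domain type='kvm'>
      <name>instance-00000008</name>
      <uuid>5f2f6ac2-07e8-46b8-8930-5f9a67979d3f</uuid>
      <memory>131072</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type></os>
      <!-- devices elided; see the full document above -->
    </domain>"""

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # power it on
    finally:
        conn.close()
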
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.928 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Preparing to wait for external event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.928 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.928 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.929 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
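
"Preparing to wait for external event network-vif-plugged-..." is an event-latch pattern: the waiter is registered before the VIF is plugged so the Neutron notification cannot slip through unobserved. A simplified stand-in using threading primitives rather than Nova's eventlet-based implementation:

    import threading

    _events = {}
    _events_lock = threading.Lock()  # plays the role of "<uuid>-events"

    def prepare_for_instance_event(instance_uuid, name):
        # Register the waiter *before* triggering the external action.
        with _events_lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def external_instance_event(instance_uuid, name):
        with _events_lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev:
            ev.set()  # releases whoever is blocked in wait()

    w = prepare_for_instance_event('5f2f6ac2', 'network-vif-plugged')
    external_instance_event('5f2f6ac2', 'network-vif-plugged')
    w.wait(timeout=300)
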
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.933 350391 DEBUG nova.virt.libvirt.vif [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:10:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-508132063',display_name='tempest-ServerAddressesTestJSON-server-508132063',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-508132063',id=8,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f3bad7f1e634c97a6a227a970edc48a',ramdisk_id='',reservation_id='r-2qz3bm4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-28329255',owner_user_name='tempest-ServerAddressesTestJSON-28329255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:11:00Z,user_data=None,user_id='d0f6705a78b34ed4991b2f5db8d428c4',uuid=5f2f6ac2-07e8-46b8-8930-5f9a67979d3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.935 350391 DEBUG nova.network.os_vif_util [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converting VIF {"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.937 350391 DEBUG nova.network.os_vif_util [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.937 350391 DEBUG os_vif [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.944 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.944 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.945 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.950 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.950 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7933322-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.951 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7933322-1a, col_values=(('external_ids', {'iface-id': 'a7933322-1af0-456e-9e1c-2102f607d4f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:72:ad', 'vm-uuid': '5f2f6ac2-07e8-46b8-8930-5f9a67979d3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.953 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.956 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
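[Annotation] The AddBridgeCommand, AddPortCommand and DbSetCommand entries above are the ovsdbapp transactions os-vif batches when plugging the OVS port. A minimal standalone sketch of the same transaction in Python, assuming a local OVSDB unix socket at /run/openvswitch/db.sock (the socket path is not shown in the log) and using only the public ovsdbapp API:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One atomic commit: ensure br-int exists, add the tap port, and set
    # the external_ids that ovn-controller matches on (values from the log).
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapa7933322-1a', may_exist=True))
        txn.add(api.db_set('Interface', 'tapa7933322-1a',
                           ('external_ids', {
                               'iface-id': 'a7933322-1af0-456e-9e1c-2102f607d4f1',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:5a:72:ad',
                               'vm-uuid': '5f2f6ac2-07e8-46b8-8930-5f9a67979d3f'})))

The 'iface-id' in external_ids is the key that lets ovn-controller claim the logical port later in this log ("Claiming lport a7933322-…").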
Nov 26 02:11:05 compute-0 NetworkManager[48886]: <info>  [1764123065.9571] manager: (tapa7933322-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.964 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:11:05 compute-0 nova_compute[350387]: 2025-11-26 02:11:05.997 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.001 350391 INFO os_vif [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a')
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.336 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.337 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.338 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] No VIF found with MAC fa:16:3e:5a:72:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.339 350391 INFO nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Using config drive
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.396 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240467683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.515 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.527 350391 DEBUG nova.compute.provider_tree [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.680 350391 DEBUG nova.scheduler.client.report [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
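[Annotation] The inventory reported to placement above bounds what the scheduler may pack onto this node: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick back-of-envelope check of those numbers:

    # Inventory values copied from the log record above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So the 8 physical vCPUs are overcommitted 4x (32 schedulable), while disk is undercommitted (0.9) to leave headroom on the 59 GiB store.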
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.830 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.832 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.906 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.907 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.931 350391 INFO nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:11:06 compute-0 nova_compute[350387]: 2025-11-26 02:11:06.960 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.195 350391 INFO nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Creating config drive at /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.201 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos1sy1eu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.244 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.249 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.250 350391 INFO nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Creating image(s)
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.292 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.346 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.407 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.420 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.455 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpos1sy1eu" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.457 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.462 350391 DEBUG nova.policy [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a7102c5716b644e9a49ae0b2b6d2bd04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:11:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.525 350391 DEBUG nova.storage.rbd_utils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] rbd image 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.535 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.563 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.564 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.566 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.566 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.618 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.630 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 a6b626e1-3c31-460a-be1a-02b342efbb84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.825 350391 DEBUG oslo_concurrency.processutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.826 350391 INFO nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Deleting local config drive /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f/disk.config because it was imported into RBD.
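[Annotation] The config-drive thread for instance 5f2f6ac2 condenses to three steps visible above: build a 'config-2' ISO9660 image with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config, then delete the local file. A sketch of the same sequence (flags copied from the log, -publisher/-quiet omitted; the staging directory was a nova tempdir, shown here as a placeholder):

    import os
    import subprocess

    uuid = '5f2f6ac2-07e8-46b8-8930-5f9a67979d3f'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'
    staging = '/tmp/tmpos1sy1eu'  # nova's metadata tempdir from the log

    # 1. Master the config-drive ISO with the 'config-2' volume label.
    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    staging], check=True)
    # 2. Import it into the RBD pool so libvirt can attach it from Ceph.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    # 3. The local copy is redundant once it lives in RBD.
    os.unlink(iso)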
Nov 26 02:11:07 compute-0 kernel: tapa7933322-1a: entered promiscuous mode
Nov 26 02:11:07 compute-0 NetworkManager[48886]: <info>  [1764123067.9454] manager: (tapa7933322-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 26 02:11:07 compute-0 ovn_controller[89102]: 2025-11-26T02:11:07Z|00082|binding|INFO|Claiming lport a7933322-1af0-456e-9e1c-2102f607d4f1 for this chassis.
Nov 26 02:11:07 compute-0 ovn_controller[89102]: 2025-11-26T02:11:07Z|00083|binding|INFO|a7933322-1af0-456e-9e1c-2102f607d4f1: Claiming fa:16:3e:5a:72:ad 10.100.0.10
Nov 26 02:11:07 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.947 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.956 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:72:ad 10.100.0.10'], port_security=['fa:16:3e:5a:72:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f2f6ac2-07e8-46b8-8930-5f9a67979d3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f3bad7f1e634c97a6a227a970edc48a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46cb60df-50e1-4c9a-877a-bea59902c38a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=142464b3-5053-4192-8cdf-d574afeb7ae1, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=a7933322-1af0-456e-9e1c-2102f607d4f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.958 286844 INFO neutron.agent.ovn.metadata.agent [-] Port a7933322-1af0-456e-9e1c-2102f607d4f1 in datapath b34932cc-66a3-49a9-8ab7-abd55886e6d2 bound to our chassis
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.962 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b34932cc-66a3-49a9-8ab7-abd55886e6d2
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.982 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fefa10-db8d-4a02-841e-22e998872aa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.983 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb34932cc-61 in ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.990 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb34932cc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.991 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[10527975-13b3-482d-8681-586bad37f5cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:07 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:07.992 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[91e91234-b556-462c-bc08-982ed9a2e5a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:07 compute-0 ovn_controller[89102]: 2025-11-26T02:11:07Z|00084|binding|INFO|Setting lport a7933322-1af0-456e-9e1c-2102f607d4f1 ovn-installed in OVS
Nov 26 02:11:07 compute-0 ovn_controller[89102]: 2025-11-26T02:11:07Z|00085|binding|INFO|Setting lport a7933322-1af0-456e-9e1c-2102f607d4f1 up in Southbound
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:07.999 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.006 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.005 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[af2da515-9858-45c9-80d2-d841dc492286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 systemd-machined[138512]: New machine qemu-8-instance-00000008.
Nov 26 02:11:08 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.035 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[aabe7b1e-53f0-49fa-8ced-801d57a25488]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 systemd-udevd[443038]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.067 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a51927-0aca-4260-8ce0-a351720699df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.075 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7c021f-644a-4ae0-83d6-91cedccd3900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 NetworkManager[48886]: <info>  [1764123068.0786] manager: (tapb34932cc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 26 02:11:08 compute-0 systemd-udevd[443041]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:11:08 compute-0 NetworkManager[48886]: <info>  [1764123068.0847] device (tapa7933322-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:11:08 compute-0 NetworkManager[48886]: <info>  [1764123068.0911] device (tapa7933322-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.104 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[f9dbeea6-9b14-41cc-bf8f-0621db851cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.111 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[59baac92-3852-4371-a3bd-b35de9d6a099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.123 350391 DEBUG nova.network.neutron [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Updated VIF entry in instance network info cache for port a7933322-1af0-456e-9e1c-2102f607d4f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.124 350391 DEBUG nova.network.neutron [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Updating instance_info_cache with network_info: [{"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:11:08 compute-0 NetworkManager[48886]: <info>  [1764123068.1375] device (tapb34932cc-60): carrier: link connected
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.143 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[30adc739-c8d9-45bd-9894-8968526d272f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.145 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 a6b626e1-3c31-460a-be1a-02b342efbb84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.159 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[68f2cff3-becb-4977-b80f-d68eee0ef921]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb34932cc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669876, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443067, 'error': None, 'target': 'ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.175 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb9ae14-83a1-4116-9764-e3adda359cbb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:c72a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669876, 'tstamp': 669876}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 443075, 'error': None, 'target': 'ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.191 350391 DEBUG oslo_concurrency.lockutils [req-95c044a1-dba6-42f4-abb1-ca9c75cd664e req-e5e7d0f2-08cd-4376-bcce-e7d11a9fa82f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.199 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4db53f4e-73b9-4a4b-9801-0db49bc35cd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb34932cc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:c7:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669876, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 443087, 'error': None, 'target': 'ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.232 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0332d18a-2134-429c-bc23-f27320fd0618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.264 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] resizing rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
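[Annotation] The resize target is the flavor's root disk expressed in bytes, presumably root_gb=1 for this flavor (the sibling instance at 02:11:05 shows root_gb=1 explicitly):

    # 1 GiB root disk -> rbd resize target in bytes
    assert 1 * 1024**3 == 1073741824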
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.324 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c16f142b-8037-4747-a2af-0bd00d4a728c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.327 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb34932cc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.328 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.329 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb34932cc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:11:08 compute-0 NetworkManager[48886]: <info>  [1764123068.3344] manager: (tapb34932cc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 26 02:11:08 compute-0 kernel: tapb34932cc-60: entered promiscuous mode
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.337 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb34932cc-60, col_values=(('external_ids', {'iface-id': 'acf02f24-e62c-4514-83c6-d7a18ba46663'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.337 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:08 compute-0 ovn_controller[89102]: 2025-11-26T02:11:08Z|00086|binding|INFO|Releasing lport acf02f24-e62c-4514-83c6-d7a18ba46663 from this chassis (sb_readonly=0)
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.341 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b34932cc-66a3-49a9-8ab7-abd55886e6d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b34932cc-66a3-49a9-8ab7-abd55886e6d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.348 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[49d20a52-40c7-4d97-8421-f4beaf7aa291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.352 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-b34932cc-66a3-49a9-8ab7-abd55886e6d2
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/b34932cc-66a3-49a9-8ab7-abd55886e6d2.pid.haproxy
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID b34932cc-66a3-49a9-8ab7-abd55886e6d2
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
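[Annotation] The generated haproxy instance above binds 169.254.169.254:80 inside the ovnmeta-<network> namespace and relays requests to the metadata agent over the /var/lib/neutron/metadata_proxy unix socket, tagging each one with X-OVN-Network-ID so the agent can resolve which network (and thus which instance) is asking. From inside a guest on this network, consumption is plain HTTP to the link-local address; a minimal sketch:

    # Run from inside the instance; the proxy and OVN routing do the rest.
    import urllib.request

    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    print(urllib.request.urlopen(url, timeout=10).read().decode())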
Nov 26 02:11:08 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:08.354 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'env', 'PROCESS_TAG=haproxy-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b34932cc-66a3-49a9-8ab7-abd55886e6d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.356 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.384 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.471 350391 DEBUG nova.objects.instance [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'migration_context' on Instance uuid a6b626e1-3c31-460a-be1a-02b342efbb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.497 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.497 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Ensure instance console log exists: /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.498 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.498 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.498 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.688 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123068.6880515, 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.689 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] VM Started (Lifecycle Event)#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.709 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.718 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123068.688182, 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.718 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.767 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.776 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.839 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
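
[annotation] The "Paused" lifecycle event arrives while the instance is still building (task_state spawning), so the sync is skipped: the DB holds power_state 0 (NOSTATE) while libvirt reports 3 (PAUSED). A rough sketch of that decision, assuming the standard nova.compute.power_state constant values seen in the log (the helper name below is hypothetical):

    # nova.compute.power_state constant values, as logged above
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def maybe_sync_power_state(instance, vm_power_state):
        # Hypothetical helper: skip the sync while a task is in flight,
        # exactly as the manager does for a spawning instance.
        if instance['task_state'] is not None:
            print('pending task (%s). Skip.' % instance['task_state'])
            return
        if instance['power_state'] != vm_power_state:
            instance['power_state'] = vm_power_state  # nova would persist this

    maybe_sync_power_state(
        {'task_state': 'spawning', 'power_state': NOSTATE}, PAUSED)
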
Nov 26 02:11:08 compute-0 podman[443215]: 2025-11-26 02:11:08.901923181 +0000 UTC m=+0.099373024 container create f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 02:11:08 compute-0 nova_compute[350387]: 2025-11-26 02:11:08.939 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Successfully created port: 422f5ef7-f048-4c83-a300-8b5942aafb8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 02:11:08 compute-0 podman[443215]: 2025-11-26 02:11:08.863505564 +0000 UTC m=+0.060955397 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:11:08 compute-0 systemd[1]: Started libpod-conmon-f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e.scope.
Nov 26 02:11:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58639e6d286e1be81d57c27347758ef56215068f9ff07cc0412123ddea10400b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:09 compute-0 podman[443215]: 2025-11-26 02:11:09.061453591 +0000 UTC m=+0.258903474 container init f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 26 02:11:09 compute-0 podman[443215]: 2025-11-26 02:11:09.072427888 +0000 UTC m=+0.269877731 container start f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 02:11:09 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [NOTICE]   (443234) : New worker (443236) forked
Nov 26 02:11:09 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [NOTICE]   (443234) : Loading success.
Nov 26 02:11:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 26 02:11:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.411 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Successfully updated port: 422f5ef7-f048-4c83-a300-8b5942aafb8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.437 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.438 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquired lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.438 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:11:10 compute-0 podman[443245]: 2025-11-26 02:11:10.58507237 +0000 UTC m=+0.129515359 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350)
Nov 26 02:11:10 compute-0 podman[443246]: 2025-11-26 02:11:10.586982964 +0000 UTC m=+0.126065653 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.753 350391 DEBUG nova.compute.manager [req-690452bd-6afe-4053-8c74-5a631d6cdb3c req-6dbececc-51f9-46ca-aad1-93326283eddb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.754 350391 DEBUG oslo_concurrency.lockutils [req-690452bd-6afe-4053-8c74-5a631d6cdb3c req-6dbececc-51f9-46ca-aad1-93326283eddb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.755 350391 DEBUG oslo_concurrency.lockutils [req-690452bd-6afe-4053-8c74-5a631d6cdb3c req-6dbececc-51f9-46ca-aad1-93326283eddb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.755 350391 DEBUG oslo_concurrency.lockutils [req-690452bd-6afe-4053-8c74-5a631d6cdb3c req-6dbececc-51f9-46ca-aad1-93326283eddb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.756 350391 DEBUG nova.compute.manager [req-690452bd-6afe-4053-8c74-5a631d6cdb3c req-6dbececc-51f9-46ca-aad1-93326283eddb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Processing event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.757 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
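
[annotation] Spawn blocks until neutron confirms VIF plugging through the external-event API; here the wait completed in 2 seconds after the network-vif-plugged event was popped. A self-contained sketch of the same register/deliver/wait pattern using threading.Event (nova's real wait_for_instance_event is eventlet-based; all names here are illustrative):

    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, name):
        # Register interest before triggering the operation that
        # eventually produces the external event.
        events[(instance_uuid, name)] = threading.Event()

    def pop_event(instance_uuid, name):
        # Called when neutron delivers network-vif-plugged-<port>.
        events[(instance_uuid, name)].set()

    def wait_for(instance_uuid, name, timeout=300):
        if not events[(instance_uuid, name)].wait(timeout):
            raise TimeoutError('timed out waiting for %s' % name)
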
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.764 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123070.7640235, 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.765 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.768 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.776 350391 INFO nova.virt.libvirt.driver [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Instance spawned successfully.#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.776 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.788 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.807 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.822 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.822 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.824 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.825 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.826 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.827 350391 DEBUG nova.virt.libvirt.driver [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
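
[annotation] Each "Found default for ..." line records the bus or model libvirt actually chose, so later operations (device attach, rebuild) keep the same hardware layout across upgrades. A sketch of registering those defaults, assuming a plain dict stands in for Instance.system_metadata:

    # Defaults exactly as logged above for this guest
    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    system_metadata = {}  # stand-in for Instance.system_metadata
    for prop, value in defaults.items():
        # Only record a default when the image did not set the property.
        system_metadata.setdefault('image_' + prop, value)
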
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.853 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.892 350391 INFO nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Took 10.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.893 350391 DEBUG nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.962 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.968 350391 INFO nova.compute.manager [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Took 11.58 seconds to build instance.#033[00m
Nov 26 02:11:10 compute-0 nova_compute[350387]: 2025-11-26 02:11:10.987 350391 DEBUG oslo_concurrency.lockutils [None req-6ff8375d-a63d-437a-bd9f-372f5c1af9ca d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:11 compute-0 nova_compute[350387]: 2025-11-26 02:11:11.314 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 02:11:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 220 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.3 MiB/s wr, 39 op/s
Nov 26 02:11:11 compute-0 nova_compute[350387]: 2025-11-26 02:11:11.935 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.930 350391 DEBUG nova.compute.manager [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.931 350391 DEBUG nova.compute.manager [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing instance network info cache due to event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.931 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.938 350391 DEBUG nova.network.neutron [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.970 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Releasing lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.971 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Instance network_info: |[{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
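
[annotation] The network_info blob cached above is a JSON list of VIF dicts. A short sketch that pulls the fixed IPs out of such a blob; the structure follows the logged JSON, trimmed here to the relevant keys:

    import json

    network_info = json.loads('''[{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.13", "type": "fixed"}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    print(vif['id'], ip['address'])  # 422f5ef7-... 10.100.0.13
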
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.971 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.972 350391 DEBUG nova.network.neutron [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:11:12 compute-0 nova_compute[350387]: 2025-11-26 02:11:12.982 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Start _get_guest_xml network_info=[{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.003 350391 WARNING nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.011 350391 DEBUG nova.virt.libvirt.host [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.011 350391 DEBUG nova.virt.libvirt.host [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.017 350391 DEBUG nova.virt.libvirt.host [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.017 350391 DEBUG nova.virt.libvirt.host [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
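
[annotation] The two probes above look for a CPU controller first under cgroups v1 (missing) and then under cgroups v2 (found). On a v2 host the check reduces to reading the root controllers file of the unified hierarchy; a minimal sketch, assuming the standard mount point:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # The unified hierarchy lists enabled controllers in one line,
        # e.g. "cpuset cpu io memory pids".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups-v2 host
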
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.018 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.018 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.020 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.020 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.020 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.020 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.021 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.021 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.021 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.021 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.022 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.022 350391 DEBUG nova.virt.hardware [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
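
[annotation] With no flavor or image constraints (limits and preferences all 0:0:0), a 1-vCPU guest yields exactly one topology, 1 socket x 1 core x 1 thread. A sketch of the enumeration step, assuming the usual rule that sockets * cores * threads must equal the vCPU count within the given maxima (nova's internal algorithm differs in detail):

    def possible_topologies(vcpus, max_sockets=65536,
                            max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) whose product is vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
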
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.027 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.387 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 26 02:11:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:11:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983989726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.565 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
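
[annotation] The "ceph mon dump" subprocess is how nova's rbd storage driver discovers monitor addresses before writing the <host> elements into the disk XML. The same call through oslo.concurrency's processutils (a real API; command and flags are taken verbatim from the log):

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on rc != 0.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
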
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.630 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:13 compute-0 nova_compute[350387]: 2025-11-26 02:11:13.643 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:11:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33557615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.196 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.198 350391 DEBUG nova.virt.libvirt.vif [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:11:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1631385969',display_name='tempest-TestNetworkBasicOps-server-1631385969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1631385969',id=9,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVShY87yzlQOWe2u5ta5RUz1JTn9hlbCsCTuoOM49NKuxjE+WriVj7MZBmGhYZn3KtsgUeQW4ny49nFDDbEDIaBG+pCU+fOKCpWz3oR3Z1j5AqqbJOXWrfIzpHCXMzVNA==',key_name='tempest-TestNetworkBasicOps-280692433',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-eah10bx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:11:07Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=a6b626e1-3c31-460a-be1a-02b342efbb84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.199 350391 DEBUG nova.network.os_vif_util [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.200 350391 DEBUG nova.network.os_vif_util [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
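
[annotation] The os_vif_util lines convert nova's VIF dict into a typed os-vif object before the plug step. A hedged sketch of constructing the equivalent object directly with the os-vif library; the field names match the VIFOpenVSwitch repr logged above, though real code would also attach the Network object and a port profile:

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads plugins and registers os-vif object classes
    vif = vif_obj.VIFOpenVSwitch(
        id='422f5ef7-f048-4c83-a300-8b5942aafb8f',
        address='fa:16:3e:a9:2c:51',
        bridge_name='br-int',
        vif_name='tap422f5ef7-f0',
        has_traffic_filtering=True,
        active=False,
        preserve_on_delete=False)
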
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.203 350391 DEBUG nova.objects.instance [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6b626e1-3c31-460a-be1a-02b342efbb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.224 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.224 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.225 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.225 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.225 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.227 350391 INFO nova.compute.manager [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Terminating instance#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.229 350391 DEBUG nova.compute.manager [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
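
[annotation] Terminate runs under the per-instance UUID lock acquired above, so it cannot interleave with the build that released the same lock moments earlier, and any still-registered external events are cleared first. A compact sketch of that serialization, reusing the oslo.concurrency lock pattern shown earlier (function bodies are illustrative stand-ins):

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        with lockutils.lock(instance_uuid):
            clear_events_for_instance(instance_uuid)  # drop stale vif-plugged waits
            shutdown_instance(instance_uuid)          # destroy guest on hypervisor

    def clear_events_for_instance(instance_uuid):
        with lockutils.lock(instance_uuid + '-events'):
            pass  # forget callbacks still registered for this instance

    def shutdown_instance(instance_uuid):
        pass  # illustrative stand-in for the libvirt destroy path
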
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.233 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <uuid>a6b626e1-3c31-460a-be1a-02b342efbb84</uuid>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <name>instance-00000009</name>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:name>tempest-TestNetworkBasicOps-server-1631385969</nova:name>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:11:13</nova:creationTime>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:user uuid="a7102c5716b644e9a49ae0b2b6d2bd04">tempest-TestNetworkBasicOps-345735252-project-member</nova:user>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:project uuid="66fdcaf8e71a4c809ab9cab4c64ca9d5">tempest-TestNetworkBasicOps-345735252</nova:project>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <nova:port uuid="422f5ef7-f048-4c83-a300-8b5942aafb8f">
Nov 26 02:11:14 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <system>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="serial">a6b626e1-3c31-460a-be1a-02b342efbb84</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="uuid">a6b626e1-3c31-460a-be1a-02b342efbb84</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </system>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <os>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </os>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <features>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </features>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/a6b626e1-3c31-460a-be1a-02b342efbb84_disk">
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </source>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config">
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </source>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:11:14 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:a9:2c:51"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <target dev="tap422f5ef7-f0"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/console.log" append="off"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <video>
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </video>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:11:14 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:11:14 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:11:14 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:11:14 compute-0 nova_compute[350387]: </domain>
Nov 26 02:11:14 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
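The XML block above is the complete libvirt domain definition nova's libvirt driver generated for instance a6b626e1-3c31-460a-be1a-02b342efbb84 (logged by _get_guest_xml). As a minimal sketch, not part of the log, the same definition can be read back from the hypervisor for comparison; it assumes the python3-libvirt binding is installed and qemu:///system is reachable on the compute host:

    # Sketch: read back the domain XML for the instance in the log above.
    # Assumes python3-libvirt and local access to the system libvirt socket.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a6b626e1-3c31-460a-be1a-02b342efbb84")
    print(dom.XMLDesc(0))  # should match the XML emitted by _get_guest_xml
    conn.close()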
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.234 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Preparing to wait for external event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.234 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.234 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.235 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
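The acquire/release pair above is oslo.concurrency's named-lock pattern: nova serializes per-instance event bookkeeping under a lock named "<instance uuid>-events". A minimal sketch of that pattern (the function body is illustrative, not nova's code):

    # Sketch of the oslo.concurrency named-lock pattern seen in the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("a6b626e1-3c31-460a-be1a-02b342efbb84-events")
    def _create_or_get_event():
        # Runs with the named lock held; lockutils itself emits the
        # "acquired"/"released" DEBUG lines visible above.
        pass

    _create_or_get_event()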
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.236 350391 DEBUG nova.virt.libvirt.vif [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:11:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1631385969',display_name='tempest-TestNetworkBasicOps-server-1631385969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1631385969',id=9,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVShY87yzlQOWe2u5ta5RUz1JTn9hlbCsCTuoOM49NKuxjE+WriVj7MZBmGhYZn3KtsgUeQW4ny49nFDDbEDIaBG+pCU+fOKCpWz3oR3Z1j5AqqbJOXWrfIzpHCXMzVNA==',key_name='tempest-TestNetworkBasicOps-280692433',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-eah10bx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:11:07Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=a6b626e1-3c31-460a-be1a-02b342efbb84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.236 350391 DEBUG nova.network.os_vif_util [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.237 350391 DEBUG nova.network.os_vif_util [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.238 350391 DEBUG os_vif [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.239 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.240 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.240 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.244 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.244 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap422f5ef7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.244 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap422f5ef7-f0, col_values=(('external_ids', {'iface-id': '422f5ef7-f048-4c83-a300-8b5942aafb8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:2c:51', 'vm-uuid': 'a6b626e1-3c31-460a-be1a-02b342efbb84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
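AddBridgeCommand, AddPortCommand and DbSetCommand above are ovsdbapp commands batched into single OVSDB transactions: os-vif idempotently ensures br-int exists, adds the tap device to it, and tags the Interface row with the port's iface-id so ovn-controller can later bind it. A hedged sketch of the same calls; building the IDL connection is elided, and `api` is assumed to be an ovsdbapp open_vswitch OvsdbIdl handle for the local switch:

    # Sketch of the ovsdbapp transaction pattern behind the log lines above.
    # `api` (an ovsdbapp OvsdbIdl for the local ovsdb-server) is assumed.
    external_ids = {
        "iface-id": "422f5ef7-f048-4c83-a300-8b5942aafb8f",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:a9:2c:51",
        "vm-uuid": "a6b626e1-3c31-460a-be1a-02b342efbb84",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap422f5ef7-f0", may_exist=True))
        txn.add(api.db_set("Interface", "tap422f5ef7-f0",
                           ("external_ids", external_ids)))

The unplug path later in this section is the inverse: a single DelPortCommand, i.e. api.del_port("tapa7933322-1a", if_exists=True).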
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.246 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 NetworkManager[48886]: <info>  [1764123074.2483] manager: (tap422f5ef7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.250 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.259 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.260 350391 INFO os_vif [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0')#033[00m
Nov 26 02:11:14 compute-0 kernel: tapa7933322-1a (unregistering): left promiscuous mode
Nov 26 02:11:14 compute-0 NetworkManager[48886]: <info>  [1764123074.3219] device (tapa7933322-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.328 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.328 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.328 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No VIF found with MAC fa:16:3e:a9:2c:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.329 350391 INFO nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Using config drive#033[00m
Nov 26 02:11:14 compute-0 ovn_controller[89102]: 2025-11-26T02:11:14Z|00087|binding|INFO|Releasing lport a7933322-1af0-456e-9e1c-2102f607d4f1 from this chassis (sb_readonly=0)
Nov 26 02:11:14 compute-0 ovn_controller[89102]: 2025-11-26T02:11:14Z|00088|binding|INFO|Setting lport a7933322-1af0-456e-9e1c-2102f607d4f1 down in Southbound
Nov 26 02:11:14 compute-0 ovn_controller[89102]: 2025-11-26T02:11:14Z|00089|binding|INFO|Removing iface tapa7933322-1a ovn-installed in OVS
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.357 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:72:ad 10.100.0.10'], port_security=['fa:16:3e:5a:72:ad 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f2f6ac2-07e8-46b8-8930-5f9a67979d3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f3bad7f1e634c97a6a227a970edc48a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46cb60df-50e1-4c9a-877a-bea59902c38a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=142464b3-5053-4192-8cdf-d574afeb7ae1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=a7933322-1af0-456e-9e1c-2102f607d4f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.359 286844 INFO neutron.agent.ovn.metadata.agent [-] Port a7933322-1af0-456e-9e1c-2102f607d4f1 in datapath b34932cc-66a3-49a9-8ab7-abd55886e6d2 unbound from our chassis#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.362 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b34932cc-66a3-49a9-8ab7-abd55886e6d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.364 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[115b0f26-3c1b-4d7f-a712-059e45f9e5cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.365 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2 namespace which is not needed anymore#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.385 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:11:14 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 26 02:11:14 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 4.592s CPU time.
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.400 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 systemd-machined[138512]: Machine qemu-8-instance-00000008 terminated.
Nov 26 02:11:14 compute-0 NetworkManager[48886]: <info>  [1764123074.4751] manager: (tapa7933322-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.493 350391 INFO nova.virt.libvirt.driver [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Instance destroyed successfully.#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.494 350391 DEBUG nova.objects.instance [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lazy-loading 'resources' on Instance uuid 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.522 350391 DEBUG nova.virt.libvirt.vif [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:10:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-508132063',display_name='tempest-ServerAddressesTestJSON-server-508132063',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-508132063',id=8,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6f3bad7f1e634c97a6a227a970edc48a',ramdisk_id='',reservation_id='r-2qz3bm4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-28329255',owner_user_name='tempest-ServerAddressesTestJSON-28329255-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:11:10Z,user_data=None,user_id='d0f6705a78b34ed4991b2f5db8d428c4',uuid=5f2f6ac2-07e8-46b8-8930-5f9a67979d3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.523 350391 DEBUG nova.network.os_vif_util [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converting VIF {"id": "a7933322-1af0-456e-9e1c-2102f607d4f1", "address": "fa:16:3e:5a:72:ad", "network": {"id": "b34932cc-66a3-49a9-8ab7-abd55886e6d2", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1363447779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f3bad7f1e634c97a6a227a970edc48a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7933322-1a", "ovs_interfaceid": "a7933322-1af0-456e-9e1c-2102f607d4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.523 350391 DEBUG nova.network.os_vif_util [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.524 350391 DEBUG os_vif [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.525 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.525 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7933322-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.527 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.529 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.532 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.535 350391 INFO os_vif [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:72:ad,bridge_name='br-int',has_traffic_filtering=True,id=a7933322-1af0-456e-9e1c-2102f607d4f1,network=Network(b34932cc-66a3-49a9-8ab7-abd55886e6d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7933322-1a')#033[00m
Nov 26 02:11:14 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [NOTICE]   (443234) : haproxy version is 2.8.14-c23fe91
Nov 26 02:11:14 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [NOTICE]   (443234) : path to executable is /usr/sbin/haproxy
Nov 26 02:11:14 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [WARNING]  (443234) : Exiting Master process...
Nov 26 02:11:14 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [ALERT]    (443234) : Current worker (443236) exited with code 143 (Terminated)
Nov 26 02:11:14 compute-0 neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2[443229]: [WARNING]  (443234) : All workers exited. Exiting... (0)
Nov 26 02:11:14 compute-0 systemd[1]: libpod-f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e.scope: Deactivated successfully.
Nov 26 02:11:14 compute-0 podman[443403]: 2025-11-26 02:11:14.577482431 +0000 UTC m=+0.074211170 container died f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 02:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e-userdata-shm.mount: Deactivated successfully.
Nov 26 02:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-58639e6d286e1be81d57c27347758ef56215068f9ff07cc0412123ddea10400b-merged.mount: Deactivated successfully.
Nov 26 02:11:14 compute-0 podman[443403]: 2025-11-26 02:11:14.654813638 +0000 UTC m=+0.151542367 container cleanup f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:11:14 compute-0 systemd[1]: libpod-conmon-f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e.scope: Deactivated successfully.
Nov 26 02:11:14 compute-0 podman[443452]: 2025-11-26 02:11:14.753437091 +0000 UTC m=+0.063430138 container remove f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.764 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4b48b474-7839-4cbb-97f4-6b5af2c9b184]: (4, ('Wed Nov 26 02:11:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2 (f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e)\nf6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e\nWed Nov 26 02:11:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2 (f6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e)\nf6a61458f071b934c02b6b2ec96b69c0a262fa22a5fbfe111670b969e5c61f4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.767 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b62aa1-fd47-4f85-99b5-2a265af398a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.769 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb34932cc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.772 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 kernel: tapb34932cc-60: left promiscuous mode
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.793 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.796 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8adacecf-ecd0-4477-8b9a-86e493e4deb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.811 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[dc544152-0407-4892-8238-efbfd63314b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.812 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a7facfd9-7b6c-4892-a9cd-228a5dff0053]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.834 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc45122-056d-43f2-8ed0-064419a437b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669869, 'reachable_time': 41770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443470, 'error': None, 'target': 'ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:14 compute-0 systemd[1]: run-netns-ovnmeta\x2db34932cc\x2d66a3\x2d49a9\x2d8ab7\x2dabd55886e6d2.mount: Deactivated successfully.
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.838 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:11:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:14.838 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[b0421f33-b9ea-494f-a08f-6afbbb75699c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
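The namespace deletion above runs through neutron's privsep daemon; under the hood it is a network-namespace removal, for which pyroute2 (already a neutron dependency) is the usual mechanism. A hedged sketch of the equivalent operation, with the namespace name taken from the log (requires root/CAP_SYS_ADMIN):

    # Sketch: remove the per-network metadata namespace, as the privileged
    # remove_netns call above ultimately does. Needs CAP_SYS_ADMIN.
    from pyroute2 import netns

    name = "ovnmeta-b34932cc-66a3-49a9-8ab7-abd55886e6d2"
    if name in netns.listnetns():
        netns.remove(name)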
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.879 350391 INFO nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Creating config drive at /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config#033[00m
Nov 26 02:11:14 compute-0 nova_compute[350387]: 2025-11-26 02:11:14.884 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw1jlizgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.018 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw1jlizgf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
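The "Running cmd"/"returned" pair above is oslo.concurrency's subprocess helper building the config-drive ISO. A minimal sketch of the same helper; the mkisofs arguments mirror the logged command, but the output and source paths here are placeholders, not the instance's real paths:

    # Sketch of the oslo.concurrency subprocess helper used above.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", "/tmp/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
        "-V", "config-2", "/tmp/config-drive-contents")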
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.079 350391 DEBUG nova.storage.rbd_utils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
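The rbd_utils DEBUG line above is nova probing Ceph for an existing config-drive image before importing a fresh one. A hedged sketch of that existence check with the Ceph python bindings; the client id and pool come from the surrounding log lines, and python3-rados/python3-rbd plus a reachable cluster are assumed:

    # Sketch: check whether the config-drive RBD image already exists.
    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                rbd.Image(ioctx, "a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config").close()
                print("image exists")
            except rbd.ImageNotFound:
                print("rbd image does not exist")  # matches the DEBUG line above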
Nov 26 02:11:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.091 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.134 350391 DEBUG nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-unplugged-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.136 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.137 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.138 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.141 350391 DEBUG nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] No waiting events found dispatching network-vif-unplugged-a7933322-1af0-456e-9e1c-2102f607d4f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.142 350391 DEBUG nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-unplugged-a7933322-1af0-456e-9e1c-2102f607d4f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.144 350391 DEBUG nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.146 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.149 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.149 350391 DEBUG oslo_concurrency.lockutils [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.150 350391 DEBUG nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] No waiting events found dispatching network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.150 350391 WARNING nova.compute.manager [req-5697870f-fcd6-4579-b0b7-849ac8a8c43f req-f1bbfbad-8afb-40f8-b3b6-2e582a35f220 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received unexpected event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 for instance with vm_state active and task_state deleting.#033[00m
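The network-vif-unplugged/-plugged events being popped above are delivered by Neutron through nova's os-server-external-events API; an event arriving for an instance with no registered waiter (here, one already in task_state deleting) produces exactly these "No waiting events"/"unexpected event" lines. A hedged sketch of the delivering call; the endpoint URL and token are placeholders, not values from this log:

    # Sketch of the REST call Neutron uses to deliver such events to nova.
    import requests

    resp = requests.post(
        "http://nova-api.example.com/v2.1/os-server-external-events",
        headers={"X-Auth-Token": "<token>", "Content-Type": "application/json"},
        json={"events": [{
            "name": "network-vif-unplugged",
            "server_uuid": "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f",
            "tag": "a7933322-1af0-456e-9e1c-2102f607d4f1",
        }]},
    )
    resp.raise_for_status()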
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.155 350391 DEBUG nova.network.neutron [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updated VIF entry in instance network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.156 350391 DEBUG nova.network.neutron [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.176 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.177 350391 DEBUG nova.compute.manager [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.177 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.179 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.179 350391 DEBUG oslo_concurrency.lockutils [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.181 350391 DEBUG nova.compute.manager [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] No waiting events found dispatching network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.182 350391 WARNING nova.compute.manager [req-0262b343-a63a-4ecc-a2e3-27ffe4fd9f89 req-0e44a45c-a3f8-41bc-aaa3-ffda0df5af69 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received unexpected event network-vif-plugged-a7933322-1af0-456e-9e1c-2102f607d4f1 for instance with vm_state active and task_state None.#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.302 350391 INFO nova.virt.libvirt.driver [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Deleting instance files /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_del#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.304 350391 INFO nova.virt.libvirt.driver [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Deletion of /var/lib/nova/instances/5f2f6ac2-07e8-46b8-8930-5f9a67979d3f_del complete#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.358 350391 DEBUG oslo_concurrency.processutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config a6b626e1-3c31-460a-be1a-02b342efbb84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.359 350391 INFO nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Deleting local config drive /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84/disk.config because it was imported into RBD.#033[00m
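
Request req-ad375f8a shows the RBD-backed config-drive path: the drive is built locally, pushed into the Ceph vms pool with rbd import, and the local copy is then deleted. The import step can be reproduced verbatim; every argument below is taken from the logged command line, with subprocess used only as a wrapper:

    # The rbd import exactly as logged at 02:11:15.358 (returned 0 in 0.266s).
    import subprocess

    inst = 'a6b626e1-3c31-460a-be1a-02b342efbb84'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         f'/var/lib/nova/instances/{inst}/disk.config',
         f'{inst}_disk.config', '--image-format=2',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
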
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.362 350391 INFO nova.compute.manager [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.362 350391 DEBUG oslo.service.loopingcall [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.363 350391 DEBUG nova.compute.manager [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.363 350391 DEBUG nova.network.neutron [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
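
In parallel, teardown of 5f2f6ac2 continues: network deallocation runs inside an oslo.service looping call (_deallocate_network_with_retries) so transient Neutron failures are retried instead of failing the delete. A rough sketch of that shape under a simplified retry policy (FixedIntervalLoopingCall and LoopingCallDone are real oslo.service API; nova's actual wrapper uses a back-off variant):

    # Simplified stand-in for _deallocate_network_with_retries.
    from oslo_service import loopingcall

    def deallocate_with_retries(deallocate, max_attempts=3):
        attempts = {'n': 0}

        def _try_once():
            attempts['n'] += 1
            try:
                deallocate()
            except Exception:
                if attempts['n'] >= max_attempts:
                    raise       # propagate; wait() below re-raises it
                return          # stay in the loop, retry on the next tick
            raise loopingcall.LoopingCallDone()  # success, stop looping

        timer = loopingcall.FixedIntervalLoopingCall(_try_once)
        timer.start(interval=2).wait()
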
Nov 26 02:11:15 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 02:11:15 compute-0 kernel: tap422f5ef7-f0: entered promiscuous mode
Nov 26 02:11:15 compute-0 systemd-udevd[443376]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.4614] manager: (tap422f5ef7-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 26 02:11:15 compute-0 ovn_controller[89102]: 2025-11-26T02:11:15Z|00090|binding|INFO|Claiming lport 422f5ef7-f048-4c83-a300-8b5942aafb8f for this chassis.
Nov 26 02:11:15 compute-0 ovn_controller[89102]: 2025-11-26T02:11:15Z|00091|binding|INFO|422f5ef7-f048-4c83-a300-8b5942aafb8f: Claiming fa:16:3e:a9:2c:51 10.100.0.13
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.463 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 207 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 932 KiB/s rd, 3.0 MiB/s wr, 101 op/s
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.4823] device (tap422f5ef7-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.4881] device (tap422f5ef7-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.490 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:2c:51 10.100.0.13'], port_security=['fa:16:3e:a9:2c:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a6b626e1-3c31-460a-be1a-02b342efbb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8b6275f-0b2c-431d-b2a1-cb057a9f12fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=422f5ef7-f048-4c83-a300-8b5942aafb8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.493 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 422f5ef7-f048-4c83-a300-8b5942aafb8f in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 bound to our chassis#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.497 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4#033[00m
Nov 26 02:11:15 compute-0 ovn_controller[89102]: 2025-11-26T02:11:15Z|00092|binding|INFO|Setting lport 422f5ef7-f048-4c83-a300-8b5942aafb8f ovn-installed in OVS
Nov 26 02:11:15 compute-0 ovn_controller[89102]: 2025-11-26T02:11:15Z|00093|binding|INFO|Setting lport 422f5ef7-f048-4c83-a300-8b5942aafb8f up in Southbound
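
Meanwhile the new port for a6b626e1 comes up: ovn-controller claims the lport for this chassis, marks it ovn-installed in OVS, and sets it up in the Southbound DB, while the metadata agent sees the same Port_Binding update because it registered a row event on that table (the "Matched UPDATE" line above). A sketch of such a watcher, assuming a hypothetical provision_datapath() helper (RowEvent, ROW_UPDATE, and the run(event, row, old) signature are real ovsdbapp API):

    # Port_Binding watcher in the style of PortBindingUpdatedEvent above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Fires when the chassis column changes, as logged above
            # (old=Port_Binding(chassis=[]) -> chassis set to this host).
            if hasattr(old, 'chassis') and row.chassis:
                self.agent.provision_datapath(row)  # hypothetical helper
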
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.499 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 systemd-machined[138512]: New machine qemu-9-instance-00000009.
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.515 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d4586cf2-8f0b-4de2-9d9c-c685d924d2cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.516 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6006a9a5-91 in ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.519 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6006a9a5-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.519 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28e9e1-aec6-450e-963e-572cf97dee0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.520 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[62ee92e2-36f3-45e6-9704-795c1e02037c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
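
"Provisioning metadata" means building the ovnmeta-6006a9a5 namespace and a veth pair whose inner end (tap6006a9a5-91) lives inside it; the "Interface tap6006a9a5-90 not found" probe is just the idempotency check before creation. Neutron drives this through privsep'd pyroute2 calls; a bare-bones equivalent using plain pyroute2 (must run as root, and assumes the namespace is already registered under /var/run/netns):

    # Create the veth pair and move the inner end into the namespace.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    ipr.link('add', ifname='tap6006a9a5-90', kind='veth',
             peer='tap6006a9a5-91')
    inner = ipr.link_lookup(ifname='tap6006a9a5-91')[0]
    ipr.link('set', index=inner,
             net_ns_fd='ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4')
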
Nov 26 02:11:15 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.539 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[94694bf7-bbaf-4697-9c2b-b2b901dc5387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.576 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e78cadad-5d5b-4d77-84cc-b64eee3c1428]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.614 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fb0416-8f11-4c53-88a2-6eadaa7be3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.624 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6573488c-41a4-4c58-a6e5-d3bb449b2cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.6261] manager: (tap6006a9a5-90): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.675 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9d2cfe-1429-4d47-9a22-2f2d42a1ffa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.678 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[145e20e7-a076-40f4-a052-5f02b5c21272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.7097] device (tap6006a9a5-90): carrier: link connected
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.717 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[ca85fde0-85a2-42e6-b2b9-e59326639f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.742 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1810de7e-9b14-4acf-b25a-bd1736f7dee9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6006a9a5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670633, 'reachable_time': 43533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443625, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.770 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e9a247-8bca-4a4b-83cc-2468dd4dc687]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:62d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670633, 'tstamp': 670633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 443629, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.799 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6c4597-e0af-4d2c-806d-f64b8e5b159a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6006a9a5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670633, 'reachable_time': 43533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 443633, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
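
The two large privsep replies above are raw pyroute2 RTM_NEWLINK dumps for the namespace end of the veth; everything the agent cares about (name, MAC, operational state) sits in the attrs list of nested [key, value] pairs. A tiny helper for reading such a dump, using only dict and list access on the structure shown:

    # Pick an attribute out of an RTM_NEWLINK message dict as logged above.
    def get_attr(msg, name, default=None):
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return default

    # On the message above:
    #   get_attr(msg, 'IFLA_IFNAME')  -> 'tap6006a9a5-91'
    #   get_attr(msg, 'IFLA_ADDRESS') -> 'fa:16:3e:a6:62:d4'
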
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.844 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[db5ac408-3716-44b0-bc27-1bdb6f40224d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.931 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[7f31ec8a-8c72-4f34-bb3c-39a8366ddba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.933 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6006a9a5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.934 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.935 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6006a9a5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:15 compute-0 kernel: tap6006a9a5-90: entered promiscuous mode
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.938 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 NetworkManager[48886]: <info>  [1764123075.9392] manager: (tap6006a9a5-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.944 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6006a9a5-90, col_values=(('external_ids', {'iface-id': '0fdbc9f8-20bb-4f6b-b66d-965099ff6047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
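
The host end of the veth is then wired into OVS in three OVSDB steps: a defensive DelPortCommand against br-ex (a no-op, hence "Transaction caused no change"), an AddPortCommand on br-int, and a DbSetCommand stamping external_ids:iface-id so ovn-controller can match the interface to its lport. The same sequence through ovsdbapp's high-level API, assuming an already-connected api object (an ovsdbapp Open vSwitch OvsdbIdl; add_port, del_port, and db_set are its real methods):

    # The three logged OVSDB commands, batched in one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap6006a9a5-90', bridge='br-ex',
                             if_exists=True))
        txn.add(api.add_port('br-int', 'tap6006a9a5-90', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6006a9a5-90',
            ('external_ids',
             {'iface-id': '0fdbc9f8-20bb-4f6b-b66d-965099ff6047'})))
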
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.941 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.946 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 ovn_controller[89102]: 2025-11-26T02:11:15Z|00094|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.951 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6006a9a5-9f5c-48b2-8574-7469a748b2e4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6006a9a5-9f5c-48b2-8574-7469a748b2e4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.952 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2b5d88-3d96-49a7-9d6e-209d8cd9f220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.954 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-6006a9a5-9f5c-48b2-8574-7469a748b2e4
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/6006a9a5-9f5c-48b2-8574-7469a748b2e4.pid.haproxy
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID 6006a9a5-9f5c-48b2-8574-7469a748b2e4
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 02:11:15 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:15.955 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'env', 'PROCESS_TAG=haproxy-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6006a9a5-9f5c-48b2-8574-7469a748b2e4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
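
The generated haproxy_cfg above is written to /var/lib/neutron/ovn-metadata-proxy/6006a9a5-9f5c-48b2-8574-7469a748b2e4.conf and haproxy is launched inside the ovnmeta namespace via rootwrap. When debugging a proxy that fails to fork, the same config can be validated without daemonizing using haproxy's standard -c check mode, e.g.:

    # Validate the generated proxy config inside the same namespace.
    import subprocess

    ns = 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '6006a9a5-9f5c-48b2-8574-7469a748b2e4.conf')
    subprocess.run(['ip', 'netns', 'exec', ns,
                    'haproxy', '-c', '-f', cfg], check=True)
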
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.962 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:15 compute-0 nova_compute[350387]: 2025-11-26 02:11:15.963 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.163 350391 DEBUG nova.network.neutron [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.186 350391 INFO nova.compute.manager [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.251 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.252 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.301 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123076.3004045, a6b626e1-3c31-460a-be1a-02b342efbb84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.303 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] VM Started (Lifecycle Event)#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.467 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.473 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123076.300573, a6b626e1-3c31-460a-be1a-02b342efbb84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.473 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.490 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.499 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.532 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
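
The Paused lifecycle event is reconciled against the database: the numeric power states in the log line are nova's constants (DB power_state 0, VM power_state 3 here), and because task_state is still spawning the sync is skipped rather than fighting the in-flight build. For reference, the values defined in nova/compute/power_state.py:

    # Numeric power states as they appear in the log line above.
    NOSTATE = 0x00    # DB value before the first successful sync
    RUNNING = 0x01
    PAUSED = 0x03     # what libvirt reports mid-spawn, before resume
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07
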
Nov 26 02:11:16 compute-0 podman[443748]: 2025-11-26 02:11:16.555508362 +0000 UTC m=+0.118178433 container create 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 02:11:16 compute-0 nova_compute[350387]: 2025-11-26 02:11:16.559 350391 DEBUG oslo_concurrency.processutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:16 compute-0 podman[443748]: 2025-11-26 02:11:16.48800016 +0000 UTC m=+0.050670281 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:11:16 compute-0 systemd[1]: Started libpod-conmon-233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f.scope.
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:11:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3c532a38-9e10-43d1-a4c8-f188f14eb728 does not exist
Nov 26 02:11:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d0eaca24-ccf0-4ed3-90b9-18018ce9f3f3 does not exist
Nov 26 02:11:16 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5bbaf6da-6bf4-4b44-8c43-52d9f65ed205 does not exist
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b87993993d907c4c1282e1161ef9c6579d6143f9cee71cdea5072bafb3dbf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:11:16 compute-0 podman[443748]: 2025-11-26 02:11:16.683963311 +0000 UTC m=+0.246633462 container init 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:11:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:11:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:11:16 compute-0 podman[443748]: 2025-11-26 02:11:16.699619009 +0000 UTC m=+0.262289120 container start 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 02:11:16 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [NOTICE]   (443798) : New worker (443807) forked
Nov 26 02:11:16 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [NOTICE]   (443798) : Loading success.
Nov 26 02:11:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:11:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:11:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674525599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.027 350391 DEBUG oslo_concurrency.processutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.037 350391 DEBUG nova.compute.provider_tree [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.060 350391 DEBUG nova.scheduler.client.report [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
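
After the delete, the resource tracker refreshes placement inventory; with RBD-backed storage the DISK_GB figures come from ceph df rather than the local filesystem, which is why the CLI call at 02:11:16.559 precedes the "Inventory has not changed" comparison. A sketch of that derivation (the stats keys are standard ceph df JSON output; the GiB rounding here is illustrative):

    # Derive DISK_GB totals from `ceph df --format=json`, as logged.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True).stdout
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] // 1024 ** 3       # 60 GiB in this log
    avail_gb = stats['total_avail_bytes'] // 1024 ** 3
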
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.083 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.115 350391 INFO nova.scheduler.client.report [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Deleted allocations for instance 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f#033[00m
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.192 350391 DEBUG oslo_concurrency.lockutils [None req-b465df2e-3287-474a-aa0f-dff7ef6b0b5d d0f6705a78b34ed4991b2f5db8d428c4 6f3bad7f1e634c97a6a227a970edc48a - - default default] Lock "5f2f6ac2-07e8-46b8-8930-5f9a67979d3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:17 compute-0 nova_compute[350387]: 2025-11-26 02:11:17.351 350391 DEBUG nova.compute.manager [req-1a5c3c68-f65a-40ca-b81f-be23d9d2f7f0 req-414e31e5-ab17-4c39-bfba-35b78f9f8f24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Received event network-vif-deleted-a7933322-1af0-456e-9e1c-2102f607d4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 183 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Nov 26 02:11:17 compute-0 podman[443952]: 2025-11-26 02:11:17.824252451 +0000 UTC m=+0.101948688 container create b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 26 02:11:17 compute-0 podman[443952]: 2025-11-26 02:11:17.788154769 +0000 UTC m=+0.065851066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:17 compute-0 systemd[1]: Started libpod-conmon-b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba.scope.
Nov 26 02:11:17 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:17 compute-0 podman[443952]: 2025-11-26 02:11:17.990430326 +0000 UTC m=+0.268126623 container init b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 02:11:18 compute-0 podman[443952]: 2025-11-26 02:11:18.007664009 +0000 UTC m=+0.285360246 container start b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:18 compute-0 podman[443952]: 2025-11-26 02:11:18.014546902 +0000 UTC m=+0.292243139 container attach b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:11:18 compute-0 quirky_roentgen[443966]: 167 167
Nov 26 02:11:18 compute-0 systemd[1]: libpod-b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba.scope: Deactivated successfully.
Nov 26 02:11:18 compute-0 podman[443952]: 2025-11-26 02:11:18.025771797 +0000 UTC m=+0.303468034 container died b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b030ef9fd33854033652b637beee19a68a9d1bea098167b23c82aec4358a980a-merged.mount: Deactivated successfully.
Nov 26 02:11:18 compute-0 podman[443952]: 2025-11-26 02:11:18.09727106 +0000 UTC m=+0.374967297 container remove b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_roentgen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:11:18 compute-0 systemd[1]: libpod-conmon-b296716971341928fd10565f2abb08b246bc3db4cbdd95eb6d9cef92b7b60eba.scope: Deactivated successfully.
Nov 26 02:11:18 compute-0 nova_compute[350387]: 2025-11-26 02:11:18.389 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:18 compute-0 podman[443991]: 2025-11-26 02:11:18.428676786 +0000 UTC m=+0.112660658 container create d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:11:18 compute-0 podman[443991]: 2025-11-26 02:11:18.378352706 +0000 UTC m=+0.062336668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:18 compute-0 systemd[1]: Started libpod-conmon-d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7.scope.
Nov 26 02:11:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:18 compute-0 podman[443991]: 2025-11-26 02:11:18.606660562 +0000 UTC m=+0.290644454 container init d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:11:18 compute-0 podman[443991]: 2025-11-26 02:11:18.630311275 +0000 UTC m=+0.314295177 container start d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:11:18 compute-0 podman[443991]: 2025-11-26 02:11:18.637899308 +0000 UTC m=+0.321883290 container attach d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:11:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 183 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 26 02:11:19 compute-0 nova_compute[350387]: 2025-11-26 02:11:19.529 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:19 compute-0 agitated_bartik[444006]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:11:19 compute-0 agitated_bartik[444006]: --> relative data size: 1.0
Nov 26 02:11:19 compute-0 agitated_bartik[444006]: --> All data devices are unavailable
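
Interleaved with the Nova traffic, cephadm is probing for new OSDs in short-lived containers (quirky_roentgen, agitated_bartik); the three lines above are ceph-volume lvm batch report output saying the three LVM-backed data devices are already consumed, so nothing new will be deployed. To see why a device is rejected, the standard inventory call can be run the same way cephadm does (wrapped in subprocess here only for consistency; the JSON fields named are ceph-volume's usual output):

    # Inspect device availability as cephadm's ceph-volume wrapper does.
    import json
    import subprocess

    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'inventory', '--format', 'json'],
        check=True, capture_output=True).stdout
    for dev in json.loads(out):
        print(dev['path'], dev['available'], dev.get('rejected_reasons'))
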
Nov 26 02:11:19 compute-0 systemd[1]: libpod-d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7.scope: Deactivated successfully.
Nov 26 02:11:19 compute-0 systemd[1]: libpod-d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7.scope: Consumed 1.210s CPU time.
Nov 26 02:11:19 compute-0 podman[443991]: 2025-11-26 02:11:19.908672692 +0000 UTC m=+1.592656584 container died d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 26 02:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c019d9100d8ff6c6f185a1b386691ae2b996cf4c904876dd787f381c4f730d2-merged.mount: Deactivated successfully.
Nov 26 02:11:20 compute-0 podman[443991]: 2025-11-26 02:11:20.000905146 +0000 UTC m=+1.684889068 container remove d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:11:20 compute-0 systemd[1]: libpod-conmon-d440a9e76b615a7b44d2221dad97d342dff975ddf68b67c400a346927f8815d7.scope: Deactivated successfully.
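Each short-lived cephadm helper runs through the same podman sequence logged above: image pull, container create, init, start, attach, then died and remove, with systemd tracking the matching libpod-*.scope and libpod-conmon-*.scope units. A small sketch of watching that sequence live, assuming podman's JSON event stream emits one object per line with Status and Name fields:

    import json
    import subprocess

    # Stream podman events for the Ceph image used by the helper containers above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))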
Nov 26 02:11:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:20 compute-0 nova_compute[350387]: 2025-11-26 02:11:20.650 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.257225486 +0000 UTC m=+0.130934249 container create 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.189772127 +0000 UTC m=+0.063480960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:21 compute-0 systemd[1]: Started libpod-conmon-3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a.scope.
Nov 26 02:11:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.395344526 +0000 UTC m=+0.269053349 container init 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.412022814 +0000 UTC m=+0.285731577 container start 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.41901802 +0000 UTC m=+0.292726793 container attach 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:11:21 compute-0 bold_visvesvaraya[444201]: 167 167
Nov 26 02:11:21 compute-0 systemd[1]: libpod-3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a.scope: Deactivated successfully.
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.424639357 +0000 UTC m=+0.298348120 container died 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c4f67735202316dbce6ea6f5ff19f1250df8338dfc2e7c08b62777a89d3220f-merged.mount: Deactivated successfully.
Nov 26 02:11:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 26 02:11:21 compute-0 podman[444186]: 2025-11-26 02:11:21.513915249 +0000 UTC m=+0.387624012 container remove 3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:11:21 compute-0 systemd[1]: libpod-conmon-3ac01fd17282e6f40a2dda3d0313d72f016d61a18db8066ee2f0a6794b2a289a.scope: Deactivated successfully.
Nov 26 02:11:21 compute-0 podman[444224]: 2025-11-26 02:11:21.772290248 +0000 UTC m=+0.069498718 container create e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 26 02:11:21 compute-0 podman[444224]: 2025-11-26 02:11:21.748049979 +0000 UTC m=+0.045258499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:21 compute-0 systemd[1]: Started libpod-conmon-e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39.scope.
Nov 26 02:11:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491a879a850ab4ce510b5cc28d6661567e2bf3f859ca8803000d3a4810964705/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491a879a850ab4ce510b5cc28d6661567e2bf3f859ca8803000d3a4810964705/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491a879a850ab4ce510b5cc28d6661567e2bf3f859ca8803000d3a4810964705/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491a879a850ab4ce510b5cc28d6661567e2bf3f859ca8803000d3a4810964705/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:21 compute-0 podman[444224]: 2025-11-26 02:11:21.962712693 +0000 UTC m=+0.259921233 container init e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:11:21 compute-0 podman[444224]: 2025-11-26 02:11:21.984991397 +0000 UTC m=+0.282199967 container start e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:21 compute-0 podman[444224]: 2025-11-26 02:11:21.992738794 +0000 UTC m=+0.289947364 container attach e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]: {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    "0": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "devices": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "/dev/loop3"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            ],
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_name": "ceph_lv0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_size": "21470642176",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "name": "ceph_lv0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "tags": {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_name": "ceph",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.crush_device_class": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.encrypted": "0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_id": "0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.vdo": "0"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            },
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "vg_name": "ceph_vg0"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        }
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    ],
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    "1": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "devices": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "/dev/loop4"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            ],
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_name": "ceph_lv1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_size": "21470642176",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "name": "ceph_lv1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "tags": {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_name": "ceph",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.crush_device_class": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.encrypted": "0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_id": "1",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.vdo": "0"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            },
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "vg_name": "ceph_vg1"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        }
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    ],
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    "2": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "devices": [
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "/dev/loop5"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            ],
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_name": "ceph_lv2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_size": "21470642176",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "name": "ceph_lv2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "tags": {
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.cluster_name": "ceph",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.crush_device_class": "",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.encrypted": "0",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osd_id": "2",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:                "ceph.vdo": "0"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            },
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "type": "block",
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:            "vg_name": "ceph_vg2"
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:        }
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]:    ]
Nov 26 02:11:22 compute-0 relaxed_heyrovsky[444240]: }
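The JSON emitted by relaxed_heyrovsky is the ceph-volume lvm list output in JSON form: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags expanded under "tags". A minimal sketch of collecting the same osd_id to device mapping, assuming the cephadm shell wrapper is available on the host:

    import json
    import subprocess

    # List OSD-backing LVs the same way the helper container above did.
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "lvm", "list",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])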
Nov 26 02:11:22 compute-0 systemd[1]: libpod-e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39.scope: Deactivated successfully.
Nov 26 02:11:22 compute-0 podman[444224]: 2025-11-26 02:11:22.834675184 +0000 UTC m=+1.131883704 container died e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-491a879a850ab4ce510b5cc28d6661567e2bf3f859ca8803000d3a4810964705-merged.mount: Deactivated successfully.
Nov 26 02:11:22 compute-0 podman[444224]: 2025-11-26 02:11:22.925743046 +0000 UTC m=+1.222951566 container remove e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:11:22 compute-0 systemd[1]: libpod-conmon-e16584494f6c6123a94c21d7ee32a4374496bfaa043e8f64f61e9b5b472ecc39.scope: Deactivated successfully.
Nov 26 02:11:23 compute-0 nova_compute[350387]: 2025-11-26 02:11:23.392 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 319 KiB/s wr, 111 op/s
Nov 26 02:11:23 compute-0 ovn_controller[89102]: 2025-11-26T02:11:23Z|00095|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:23 compute-0 ovn_controller[89102]: 2025-11-26T02:11:23Z|00096|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:11:23 compute-0 nova_compute[350387]: 2025-11-26 02:11:23.796 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.12693386 +0000 UTC m=+0.070302980 container create 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.100350246 +0000 UTC m=+0.043719406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:24 compute-0 systemd[1]: Started libpod-conmon-988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72.scope.
Nov 26 02:11:24 compute-0 nova_compute[350387]: 2025-11-26 02:11:24.228 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.278168848 +0000 UTC m=+0.221537948 container init 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.293622171 +0000 UTC m=+0.236991291 container start 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.300070271 +0000 UTC m=+0.243439381 container attach 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:24 compute-0 busy_austin[444415]: 167 167
Nov 26 02:11:24 compute-0 systemd[1]: libpod-988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72.scope: Deactivated successfully.
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.306481711 +0000 UTC m=+0.249850831 container died 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad522130c3ba361e46d3313fe72024be9d8e13c2556ed407ec1d9d5a63583ca-merged.mount: Deactivated successfully.
Nov 26 02:11:24 compute-0 podman[444399]: 2025-11-26 02:11:24.384034754 +0000 UTC m=+0.327403874 container remove 988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:11:24 compute-0 systemd[1]: libpod-conmon-988274ef3b517f497efa5f2c1063e6f956646da90a5bfce2308fa1cce0646d72.scope: Deactivated successfully.
Nov 26 02:11:24 compute-0 nova_compute[350387]: 2025-11-26 02:11:24.531 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:11:24 compute-0 podman[444437]: 2025-11-26 02:11:24.686592731 +0000 UTC m=+0.094870149 container create 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:11:24 compute-0 podman[444437]: 2025-11-26 02:11:24.647727422 +0000 UTC m=+0.056004890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:11:24 compute-0 systemd[1]: Started libpod-conmon-37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6.scope.
Nov 26 02:11:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213e09e70cde19c17160884d43db753df35b980633c2022cf7462e5a994bf8f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213e09e70cde19c17160884d43db753df35b980633c2022cf7462e5a994bf8f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213e09e70cde19c17160884d43db753df35b980633c2022cf7462e5a994bf8f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213e09e70cde19c17160884d43db753df35b980633c2022cf7462e5a994bf8f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:11:24 compute-0 podman[444437]: 2025-11-26 02:11:24.851987605 +0000 UTC m=+0.260265033 container init 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:11:24 compute-0 podman[444437]: 2025-11-26 02:11:24.887623084 +0000 UTC m=+0.295900502 container start 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 02:11:24 compute-0 podman[444437]: 2025-11-26 02:11:24.894040904 +0000 UTC m=+0.302318412 container attach 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:11:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:24.995 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:11:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:24.997 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:11:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:24.999 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
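The Acquiring/acquired/released triple above is oslo.concurrency's standard DEBUG trace around a synchronized section; the "inner" frames at lockutils.py:404/409/423 are the wrapper that emits it. A minimal sketch of the pattern that produces those three lines:

    from oslo_concurrency import lockutils

    # Any callable guarded this way logs Acquiring/acquired/released at DEBUG,
    # as ovn_metadata_agent does for "_check_child_processes" above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass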
Nov 26 02:11:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 26 02:11:26 compute-0 brave_germain[444453]: {
Nov 26 02:11:26 compute-0 brave_germain[444453]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_id": 0,
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "type": "bluestore"
Nov 26 02:11:26 compute-0 brave_germain[444453]:    },
Nov 26 02:11:26 compute-0 brave_germain[444453]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_id": 2,
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "type": "bluestore"
Nov 26 02:11:26 compute-0 brave_germain[444453]:    },
Nov 26 02:11:26 compute-0 brave_germain[444453]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_id": 1,
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:11:26 compute-0 brave_germain[444453]:        "type": "bluestore"
Nov 26 02:11:26 compute-0 brave_germain[444453]:    }
Nov 26 02:11:26 compute-0 brave_germain[444453]: }
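brave_germain prints the BlueStore view of the same disks, keyed by osd_uuid rather than OSD id (this matches the ceph-volume raw list output format). A hedged consistency check between the two listings captured in this journal, with RAW_LIST_JSON and LVM_LIST_JSON standing in for the two JSON blocks above:

    import json

    raw = json.loads(RAW_LIST_JSON)   # brave_germain output (keyed by osd_uuid)
    lvm = json.loads(LVM_LIST_JSON)   # relaxed_heyrovsky output (keyed by osd_id)

    # Every raw entry's osd_id should resolve to an LV tagged with the same
    # osd_fsid; a mismatch would mean the two inventories disagree.
    for osd_uuid, info in raw.items():
        lvs = lvm[str(info["osd_id"])]
        assert any(lv["tags"]["ceph.osd_fsid"] == osd_uuid for lv in lvs), osd_uuid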
Nov 26 02:11:26 compute-0 systemd[1]: libpod-37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6.scope: Deactivated successfully.
Nov 26 02:11:26 compute-0 podman[444437]: 2025-11-26 02:11:26.134395587 +0000 UTC m=+1.542673005 container died 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 02:11:26 compute-0 systemd[1]: libpod-37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6.scope: Consumed 1.236s CPU time.
Nov 26 02:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-213e09e70cde19c17160884d43db753df35b980633c2022cf7462e5a994bf8f6-merged.mount: Deactivated successfully.
Nov 26 02:11:26 compute-0 podman[444437]: 2025-11-26 02:11:26.226911079 +0000 UTC m=+1.635188467 container remove 37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:11:26 compute-0 systemd[1]: libpod-conmon-37eec90840b782ce399560916bf117e70d6f1611c805b7e512bee8cf8f2770a6.scope: Deactivated successfully.
Nov 26 02:11:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:11:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:26 compute-0 podman[444487]: 2025-11-26 02:11:26.27836299 +0000 UTC m=+0.101860185 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 02:11:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:11:26 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:26 compute-0 podman[444495]: 2025-11-26 02:11:26.291652903 +0000 UTC m=+0.103215053 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:11:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0d41cc25-b0bd-4827-bf31-0470d50647ba does not exist
Nov 26 02:11:26 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f396121c-11c7-4370-b5eb-989d52cda574 does not exist
Nov 26 02:11:26 compute-0 podman[444494]: 2025-11-26 02:11:26.34901936 +0000 UTC m=+0.152079632 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.374 350391 DEBUG nova.compute.manager [req-0a3c9e1c-0b01-4f18-8156-5976450fc3a7 req-579f3038-9734-4e8d-aa53-189afd19d673 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.374 350391 DEBUG oslo_concurrency.lockutils [req-0a3c9e1c-0b01-4f18-8156-5976450fc3a7 req-579f3038-9734-4e8d-aa53-189afd19d673 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.375 350391 DEBUG oslo_concurrency.lockutils [req-0a3c9e1c-0b01-4f18-8156-5976450fc3a7 req-579f3038-9734-4e8d-aa53-189afd19d673 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.375 350391 DEBUG oslo_concurrency.lockutils [req-0a3c9e1c-0b01-4f18-8156-5976450fc3a7 req-579f3038-9734-4e8d-aa53-189afd19d673 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.376 350391 DEBUG nova.compute.manager [req-0a3c9e1c-0b01-4f18-8156-5976450fc3a7 req-579f3038-9734-4e8d-aa53-189afd19d673 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Processing event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.377 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.383 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123086.3829944, a6b626e1-3c31-460a-be1a-02b342efbb84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.383 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.386 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.393 350391 INFO nova.virt.libvirt.driver [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Instance spawned successfully.#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.393 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.421 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.429 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.430 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.430 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.431 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.431 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.432 350391 DEBUG nova.virt.libvirt.driver [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
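The six "Found default" lines record the buses and models the libvirt driver actually chose for image properties the image itself left unset, so later rebuilds and device attaches keep the same hardware layout. A sketch of that bookkeeping, using the same defaults as logged; the image_ prefix reflects how Nova stores image properties in instance system_metadata, and the function shape is otherwise illustrative.

```python
DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined_details(image_props, system_metadata):
    # Only properties the image left unset get a recorded default.
    for prop, chosen in DEFAULTS.items():
        if prop not in image_props:
            system_metadata.setdefault("image_" + prop, chosen)

sysmeta = {}
register_undefined_details({}, sysmeta)   # this image defined none of them
print(sorted(sysmeta))                    # six image_hw_* keys, as logged
```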
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.441 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.482 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
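"Skip" here is the expected outcome: the Resumed lifecycle event races with the still-running spawn, so power-state sync refuses to act while a task is in flight. The guard reduced to a sketch; the constants match nova.compute.power_state, the rest is illustrative.

```python
NOSTATE, RUNNING = 0, 1   # as in nova.compute.power_state

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return "skip: pending task ({})".format(task_state)
    if db_power_state != vm_power_state:
        return "reconcile: DB says {}, VM says {}".format(
            db_power_state, vm_power_state)
    return "in sync"

# The exact situation logged: building/spawning, DB 0, VM 1 -> Skip.
print(sync_power_state(NOSTATE, RUNNING, "spawning"))
```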
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.499 350391 INFO nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Took 19.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.500 350391 DEBUG nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.583 350391 INFO nova.compute.manager [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Took 20.87 seconds to build instance.#033[00m
Nov 26 02:11:26 compute-0 nova_compute[350387]: 2025-11-26 02:11:26.619 350391 DEBUG oslo_concurrency.lockutils [None req-ad375f8a-5d5b-4896-a5c8-4cd425d39d7f a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
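The build path serializes on a per-instance-UUID lock, held 21.251 s here, essentially the whole 20.87 s build. The same oslo.concurrency primitive works standalone; this is the real decorator API with a placeholder body.

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("a6b626e1-3c31-460a-be1a-02b342efbb84")
def locked_build():
    pass  # build work; concurrent callers for this UUID queue up here

locked_build()
```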
Nov 26 02:11:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:26 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:11:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:11:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833920732' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:11:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:11:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833920732' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
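These audit entries show client.openstack polling cluster capacity via monitor commands. The same two commands can be issued from Python through librados; this uses the real python3-rados API and assumes /etc/ceph/ceph.conf plus a readable client.openstack keyring.

```python
import json
import rados  # python3-rados, ships with Ceph

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
# The same two commands the monitor logged as dispatched:
ret, out, err = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b"")
df = json.loads(out)
ret, out, err = cluster.mon_command(
    json.dumps({"prefix": "osd pool get-quota",
                "pool": "volumes", "format": "json"}), b"")
quota = json.loads(out)
cluster.shutdown()
```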
Nov 26 02:11:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 643 KiB/s rd, 17 KiB/s wr, 46 op/s
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.032 350391 DEBUG nova.objects.instance [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lazy-loading 'flavor' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.084 350391 DEBUG oslo_concurrency.lockutils [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.085 350391 DEBUG oslo_concurrency.lockutils [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.395 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.769 350391 DEBUG nova.compute.manager [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.769 350391 DEBUG oslo_concurrency.lockutils [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.770 350391 DEBUG oslo_concurrency.lockutils [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.770 350391 DEBUG oslo_concurrency.lockutils [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.771 350391 DEBUG nova.compute.manager [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] No waiting events found dispatching network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:28 compute-0 nova_compute[350387]: 2025-11-26 02:11:28.771 350391 WARNING nova.compute.manager [req-9bfdd2a9-3fec-4c98-9ef4-a749464b2a37 req-93f69fae-0a80-4bba-9381-2236bbc3a431 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received unexpected event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f for instance with vm_state active and task_state None.#033[00m
Nov 26 02:11:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.8 KiB/s wr, 15 op/s
Nov 26 02:11:29 compute-0 nova_compute[350387]: 2025-11-26 02:11:29.490 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123074.4891853, 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:29 compute-0 nova_compute[350387]: 2025-11-26 02:11:29.490 350391 INFO nova.compute.manager [-] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:11:29 compute-0 nova_compute[350387]: 2025-11-26 02:11:29.520 350391 DEBUG nova.compute.manager [None req-95b2febc-81c8-4839-9824-38f78faa94dd - - - - - -] [instance: 5f2f6ac2-07e8-46b8-8930-5f9a67979d3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:29 compute-0 nova_compute[350387]: 2025-11-26 02:11:29.533 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:29 compute-0 podman[158021]: time="2025-11-26T02:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:11:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:11:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9106 "" "Go-http-client/1.1"
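The podman service lines are the libpod REST API being scraped over its unix socket. A dependency-free way to issue the same GET from Python is sketched below; /run/podman/podman.sock is the assumed default root socket path, and UnixHTTPConnection is a local helper, not part of podman.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket; local helper for the libpod API."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed default path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```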
Nov 26 02:11:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:31 compute-0 nova_compute[350387]: 2025-11-26 02:11:31.033 350391 DEBUG nova.network.neutron [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:11:31 compute-0 nova_compute[350387]: 2025-11-26 02:11:31.177 350391 DEBUG nova.compute.manager [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:31 compute-0 nova_compute[350387]: 2025-11-26 02:11:31.177 350391 DEBUG nova.compute.manager [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing instance network info cache due to event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:11:31 compute-0 nova_compute[350387]: 2025-11-26 02:11:31.177 350391 DEBUG oslo_concurrency.lockutils [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:31 compute-0 openstack_network_exporter[367323]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:11:31 compute-0 openstack_network_exporter[367323]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:11:31 compute-0 openstack_network_exporter[367323]: ERROR   02:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:11:31 compute-0 openstack_network_exporter[367323]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:11:31 compute-0 openstack_network_exporter[367323]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:11:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.2 KiB/s wr, 52 op/s
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.325 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.326 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.326 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.326 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.327 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.399 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.431 350391 DEBUG nova.compute.manager [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.432 350391 DEBUG nova.compute.manager [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing instance network info cache due to event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.434 350391 DEBUG oslo_concurrency.lockutils [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.435 350391 DEBUG oslo_concurrency.lockutils [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.436 350391 DEBUG nova.network.neutron [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:11:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 50 op/s
Nov 26 02:11:33 compute-0 podman[444604]: 2025-11-26 02:11:33.565463392 +0000 UTC m=+0.116429663 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 02:11:33 compute-0 podman[444605]: 2025-11-26 02:11:33.612970663 +0000 UTC m=+0.150440676 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:11:33 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:33 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897433727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.845 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
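update_available_resource shells out to ceph df (0.517 s here) because the instance disks live in RBD, so free space comes from the cluster rather than the local filesystem. The probe can be reproduced with the same oslo.concurrency call; processutils.execute is the real API, and the stats fields are standard ceph df JSON output.

```python
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
stats = json.loads(out)["stats"]
print("free: %.2f GiB" % (stats["total_avail_bytes"] / 1024 ** 3))
```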
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.976 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.977 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.982 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:11:33 compute-0 nova_compute[350387]: 2025-11-26 02:11:33.982 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.535 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.666 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.668 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3737MB free_disk=59.92180633544922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.668 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.669 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.766 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 5c8719f7-1028-4983-aa89-c99a459b6295 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.767 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a6b626e1-3c31-460a-be1a-02b342efbb84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.768 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.768 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
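The final view is consistent with the two placement allocations and the inventory logged around it: used_ram folds the 512 MB host reservation in with the two 128 MB instances. A quick check:

```python
instances = [{"VCPU": 1, "MEMORY_MB": 128, "DISK_GB": 1}] * 2  # the two allocations
reserved_ram_mb = 512                  # MEMORY_MB "reserved" in the inventory
used = (reserved_ram_mb + sum(i["MEMORY_MB"] for i in instances),
        sum(i["VCPU"] for i in instances),
        sum(i["DISK_GB"] for i in instances))
assert used == (768, 2, 2)   # used_ram=768MB used_vcpus=2 used_disk=2GB, as logged
```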
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.785 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.811 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.812 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
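Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, which is what this inventory amounts to:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```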
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.829 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.850 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 02:11:34 compute-0 nova_compute[350387]: 2025-11-26 02:11:34.929 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.093 350391 DEBUG nova.network.neutron [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
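The network_info blob Nova caches is a list of VIFs, each nesting network -> subnets -> ips -> floating_ips. A small walker over a trimmed copy of the payload above pulls out the addresses: 10.100.0.6 with no floating IP, and 10.100.0.9 behind 192.168.122.183.

```python
cached_network_info = [{
    "id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614",
    "network": {"subnets": [{"ips": [
        {"address": "10.100.0.6", "floating_ips": []},
        {"address": "10.100.0.9",
         "floating_ips": [{"address": "192.168.122.183"}]},
    ]}]},
}]

def addresses(network_info):
    # vif -> network -> subnets -> ips, exactly as in the cached JSON
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield (vif["id"], ip["address"],
                       [f["address"] for f in ip["floating_ips"]])

for port, fixed, floating in addresses(cached_network_info):
    print(port, fixed, floating or "-")
```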
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.131 350391 DEBUG oslo_concurrency.lockutils [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.132 350391 DEBUG nova.compute.manager [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.132 350391 DEBUG nova.compute.manager [None req-f277a4ea-4c5f-4eb1-b4be-3bb056ac3ef5 aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] network_info to inject: |[{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.137 350391 DEBUG oslo_concurrency.lockutils [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.137 350391 DEBUG nova.network.neutron [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:11:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:35 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535763982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.469 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.484 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:11:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.522 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.545 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:11:35 compute-0 nova_compute[350387]: 2025-11-26 02:11:35.546 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:35 compute-0 podman[444690]: 2025-11-26 02:11:35.582543017 +0000 UTC m=+0.126450074 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=)
Nov 26 02:11:36 compute-0 ovn_controller[89102]: 2025-11-26T02:11:36Z|00097|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:36 compute-0 ovn_controller[89102]: 2025-11-26T02:11:36Z|00098|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.316 350391 DEBUG nova.objects.instance [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lazy-loading 'flavor' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.344 350391 DEBUG oslo_concurrency.lockutils [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.348 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:36 compute-0 podman[444712]: 2025-11-26 02:11:36.574234343 +0000 UTC m=+0.126716142 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.630 350391 DEBUG nova.network.neutron [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updated VIF entry in instance network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.631 350391 DEBUG nova.network.neutron [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.656 350391 DEBUG oslo_concurrency.lockutils [req-0e4eb4af-de83-4eae-8a57-88ab6c2df576 req-1244a145-c942-42e3-a463-0d4fe2ac92f0 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:36 compute-0 nova_compute[350387]: 2025-11-26 02:11:36.803 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 KiB/s wr, 64 op/s
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.517 350391 DEBUG nova.network.neutron [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated VIF entry in instance network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.517 350391 DEBUG nova.network.neutron [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.547 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.548 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.552 350391 DEBUG oslo_concurrency.lockutils [req-2f080e22-e28e-4060-ae95-ad477f9fafee req-1ab3739b-7943-40db-a12f-4fcaadc4c028 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:37 compute-0 nova_compute[350387]: 2025-11-26 02:11:37.553 350391 DEBUG oslo_concurrency.lockutils [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:38 compute-0 nova_compute[350387]: 2025-11-26 02:11:38.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:38 compute-0 nova_compute[350387]: 2025-11-26 02:11:38.402 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.542 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.545 350391 DEBUG nova.network.neutron [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.763 350391 DEBUG nova.compute.manager [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.764 350391 DEBUG nova.compute.manager [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing instance network info cache due to event network-changed-4b2c5180-2ff0-4b98-90cb-e0e6ba068614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.765 350391 DEBUG oslo_concurrency.lockutils [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:39 compute-0 nova_compute[350387]: 2025-11-26 02:11:39.971 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.101523) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100101575, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1017, "num_deletes": 256, "total_data_size": 1393305, "memory_usage": 1415096, "flush_reason": "Manual Compaction"}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100110380, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1368641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36792, "largest_seqno": 37808, "table_properties": {"data_size": 1363744, "index_size": 2421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10688, "raw_average_key_size": 19, "raw_value_size": 1353820, "raw_average_value_size": 2443, "num_data_blocks": 108, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123011, "oldest_key_time": 1764123011, "file_creation_time": 1764123100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 8921 microseconds, and 3972 cpu microseconds.
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.110451) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1368641 bytes OK
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.110469) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.112865) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.112879) EVENT_LOG_v1 {"time_micros": 1764123100112874, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.112895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1388474, prev total WAL file size 1388474, number of live WAL files 2.
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.113735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353036' seq:0, type:0; will stop at (end)
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1336KB)], [83(8439KB)]
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100113772, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10010707, "oldest_snapshot_seqno": -1}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5694 keys, 9906088 bytes, temperature: kUnknown
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100152433, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9906088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9866709, "index_size": 24037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 144493, "raw_average_key_size": 25, "raw_value_size": 9762416, "raw_average_value_size": 1714, "num_data_blocks": 989, "num_entries": 5694, "num_filter_entries": 5694, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.152619) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9906088 bytes
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.154588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.6 rd, 255.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.2 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(14.6) write-amplify(7.2) OK, records in: 6218, records dropped: 524 output_compression: NoCompression
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.154606) EVENT_LOG_v1 {"time_micros": 1764123100154598, "job": 48, "event": "compaction_finished", "compaction_time_micros": 38715, "compaction_time_cpu_micros": 19397, "output_level": 6, "num_output_files": 1, "total_output_size": 9906088, "num_input_records": 6218, "num_output_records": 5694, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100155015, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123100156650, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.113611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.156955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.156963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.156966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.156969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:11:40 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:11:40.156972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
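
Each rocksdb EVENT_LOG_v1 line above carries a JSON object after the marker, so the flush and compaction activity (jobs 47 and 48) can be recovered mechanically from the journal. A stdlib-only sketch:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        # Yield the decoded JSON payload of every EVENT_LOG_v1 record.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # For example, the job-48 "compaction_finished" event above reports
    # 6218 input records compacted down to 5694 output records.
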
Nov 26 02:11:40 compute-0 nova_compute[350387]: 2025-11-26 02:11:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:40 compute-0 nova_compute[350387]: 2025-11-26 02:11:40.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:11:40 compute-0 nova_compute[350387]: 2025-11-26 02:11:40.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:11:40 compute-0 nova_compute[350387]: 2025-11-26 02:11:40.722 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:11:41
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr']
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
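
The balancer pass above ran in upmap mode and prepared 0/10 changes, meaning the placement groups are already balanced. A sketch for checking the same thing interactively, assuming a configured ceph CLI; the exact JSON field names ("mode", "active") are an assumption:

    import json
    import subprocess

    # Query the balancer module the log lines come from.
    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))
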
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 64 op/s
Nov 26 02:11:41 compute-0 podman[444731]: 2025-11-26 02:11:41.569741518 +0000 UTC m=+0.112194095 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:11:41 compute-0 podman[444730]: 2025-11-26 02:11:41.592931257 +0000 UTC m=+0.143034118 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
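
The two health_status=healthy events above are emitted by podman's periodic healthchecks for the telemetry containers. The same check can be run by hand; exit status 0 means the configured test passed. A sketch with the container names from the log:

    import subprocess

    def is_healthy(container):
        # "podman healthcheck run" executes the container's configured
        # healthcheck command and returns 0 when it passes.
        return subprocess.run(
            ["podman", "healthcheck", "run", container]).returncode == 0

    for name in ("node_exporter", "openstack_network_exporter"):
        print(name, is_healthy(name))
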
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:11:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:11:42 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:42.552 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.553 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:42 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:42.555 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
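
The "Matched UPDATE: SbGlobalUpdateEvent(...)" line above comes from ovsdbapp's row-event machinery: the agent registers an event against the SB_Global table, and its run() fires whenever the row changes (here, nb_cfg going from 12 to 13). A sketch of such a handler, mirroring the constructor arguments the log prints:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Same (events, table, conditions) tuple shown in the matched
            # event above: react to updates of the single SB_Global row.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            print("nb_cfg bumped to", row.nb_cfg)
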
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.657 350391 DEBUG nova.network.neutron [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.691 350391 DEBUG oslo_concurrency.lockutils [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.691 350391 DEBUG nova.compute.manager [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.692 350391 DEBUG nova.compute.manager [None req-ca928d20-85dc-4426-a81e-64438cbde70b aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] network_info to inject: |[{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.693 350391 DEBUG oslo_concurrency.lockutils [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:42 compute-0 nova_compute[350387]: 2025-11-26 02:11:42.694 350391 DEBUG nova.network.neutron [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Refreshing network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 836 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Nov 26 02:11:43 compute-0 ovn_controller[89102]: 2025-11-26T02:11:43Z|00099|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:43 compute-0 ovn_controller[89102]: 2025-11-26T02:11:43Z|00100|binding|INFO|Releasing lport 6285b1b6-6fe8-49b4-8dbc-d2e179b3b43b from this chassis (sb_readonly=0)
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.779 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.936 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.937 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.938 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.938 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.939 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.942 350391 INFO nova.compute.manager [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Terminating instance#033[00m
Nov 26 02:11:43 compute-0 nova_compute[350387]: 2025-11-26 02:11:43.944 350391 DEBUG nova.compute.manager [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
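
The lockutils lines above show the terminate path's two-level locking: the per-instance lock is held across the whole do_terminate_instance body, while a nested "<uuid>-events" lock is taken briefly to clear pending external events. Sketched with the same oslo.concurrency primitives (not nova's literal code):

    from oslo_concurrency import lockutils

    UUID = "5c8719f7-1028-4983-aa89-c99a459b6295"  # instance uuid from the log

    @lockutils.synchronized(UUID)
    def do_terminate_instance():
        with lockutils.lock(UUID + "-events"):
            pass  # clear_events_for_instance
        # ... then destroy the guest on the hypervisor ...
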
Nov 26 02:11:44 compute-0 kernel: tap4b2c5180-2f (unregistering): left promiscuous mode
Nov 26 02:11:44 compute-0 NetworkManager[48886]: <info>  [1764123104.0727] device (tap4b2c5180-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.107 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 ovn_controller[89102]: 2025-11-26T02:11:44Z|00101|binding|INFO|Releasing lport 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 from this chassis (sb_readonly=0)
Nov 26 02:11:44 compute-0 ovn_controller[89102]: 2025-11-26T02:11:44Z|00102|binding|INFO|Setting lport 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 down in Southbound
Nov 26 02:11:44 compute-0 ovn_controller[89102]: 2025-11-26T02:11:44Z|00103|binding|INFO|Removing iface tap4b2c5180-2f ovn-installed in OVS
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.126 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.129 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:7b:7e 10.100.0.9'], port_security=['fa:16:3e:5a:7b:7e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5c8719f7-1028-4983-aa89-c99a459b6295', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e89566-5c79-472a-819f-45cd3bbc2134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339deb116b764070abc6d50520ee33c8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8bca6503-83d9-4549-b632-308ae47fd689', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dc53788-f43e-4c82-98d2-b64a154786fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=4b2c5180-2ff0-4b98-90cb-e0e6ba068614) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.130 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614 in datapath 14e89566-5c79-472a-819f-45cd3bbc2134 unbound from our chassis#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.132 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e89566-5c79-472a-819f-45cd3bbc2134, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.133 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[605350e7-c5a4-4237-ac4f-62332ac8694c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.134 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134 namespace which is not needed anymore#033[00m
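
With the last VIF port unbound, the agent tears down the per-network metadata namespace (the remove_netns confirmation appears at 02:11:44.604 below). The agent goes through oslo.privsep and neutron's ip_lib rather than the shell; the CLI equivalent of the final step is simply:

    import subprocess

    # Namespace name taken from the log line above.
    NS = "ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134"
    subprocess.run(["ip", "netns", "delete", NS], check=True)
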
Nov 26 02:11:44 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 26 02:11:44 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 48.145s CPU time.
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.147 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 systemd-machined[138512]: Machine qemu-6-instance-00000006 terminated.
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.201 350391 INFO nova.virt.libvirt.driver [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Instance destroyed successfully.#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.201 350391 DEBUG nova.objects.instance [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lazy-loading 'resources' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.219 350391 DEBUG nova.virt.libvirt.vif [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-173154417',display_name='tempest-AttachInterfacesUnderV243Test-server-173154417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-173154417',id=6,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZtmevDBOH7h2uuNZDcJCbOFxIp1AvwcCYBRUuKNsTRUBZcQypMSSPUOMMpAITLGs2JRuuQVbR8AitbKv36s+fXFQUTo2Ffyoxd6fZW1aMdi088cBYkrvxHsEH3GZ43LA==',key_name='tempest-keypair-317693094',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:10:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339deb116b764070abc6d50520ee33c8',ramdisk_id='',reservation_id='r-i9hkqns0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-270246256',owner_user_name='tempest-AttachInterfacesUnderV243Test-270246256-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:11:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aadae2b9a9834185b051c2bc59c6054a',uuid=5c8719f7-1028-4983-aa89-c99a459b6295,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.219 350391 DEBUG nova.network.os_vif_util [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converting VIF {"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.220 350391 DEBUG nova.network.os_vif_util [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.220 350391 DEBUG os_vif [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.222 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.223 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b2c5180-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.225 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.228 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.230 350391 INFO os_vif [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:7b:7e,bridge_name='br-int',has_traffic_filtering=True,id=4b2c5180-2ff0-4b98-90cb-e0e6ba068614,network=Network(14e89566-5c79-472a-819f-45cd3bbc2134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b2c5180-2f')#033[00m
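
The unplug sequence above ends with a single ovsdbapp transaction, DelPortCommand(port=tap4b2c5180-2f, bridge=br-int, if_exists=True). Expressed with the ovs-vsctl CLI instead of the native IDL, the same operation is:

    import subprocess

    # CLI form of the DelPortCommand transaction from the log.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap4b2c5180-2f"],
        check=True)
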
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [NOTICE]   (441873) : haproxy version is 2.8.14-c23fe91
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [NOTICE]   (441873) : path to executable is /usr/sbin/haproxy
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [WARNING]  (441873) : Exiting Master process...
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [WARNING]  (441873) : Exiting Master process...
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [ALERT]    (441873) : Current worker (441876) exited with code 143 (Terminated)
Nov 26 02:11:44 compute-0 neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134[441869]: [WARNING]  (441873) : All workers exited. Exiting... (0)
Nov 26 02:11:44 compute-0 systemd[1]: libpod-db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e.scope: Deactivated successfully.
Nov 26 02:11:44 compute-0 podman[444821]: 2025-11-26 02:11:44.352106665 +0000 UTC m=+0.057418059 container died db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-51be4e4eff0774c18211445f63e15087e46460fd324f4463a2ce3dc4a716fb82-merged.mount: Deactivated successfully.
Nov 26 02:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e-userdata-shm.mount: Deactivated successfully.
Nov 26 02:11:44 compute-0 podman[444821]: 2025-11-26 02:11:44.41473357 +0000 UTC m=+0.120044964 container cleanup db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 26 02:11:44 compute-0 systemd[1]: libpod-conmon-db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e.scope: Deactivated successfully.
Nov 26 02:11:44 compute-0 podman[444847]: 2025-11-26 02:11:44.510318568 +0000 UTC m=+0.059005324 container remove db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.526 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed1793f-1145-4d62-9371-a8f168901fc0]: (4, ('Wed Nov 26 02:11:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134 (db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e)\ndb0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e\nWed Nov 26 02:11:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134 (db0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e)\ndb0b6ce7a587e4b091d8d73a70586f1fd39569d1049d2305b7bd28fb64e2889e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.529 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fe58bb82-1589-450b-951c-4e6c439ded4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.530 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e89566-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.536 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 kernel: tap14e89566-50: left promiscuous mode
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.550 350391 DEBUG nova.network.neutron [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated VIF entry in instance network info cache for port 4b2c5180-2ff0-4b98-90cb-e0e6ba068614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.550 350391 DEBUG nova.network.neutron [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.552 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.554 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.554 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f30dd289-050b-4439-a0db-fd0a9d2cc144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.564 350391 DEBUG oslo_concurrency.lockutils [req-9d0cadc8-efee-45d5-9e67-ef0bc72d387d req-6e5d182c-e387-46fe-80dc-4f4cc7747165 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.564 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.564 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:11:44 compute-0 nova_compute[350387]: 2025-11-26 02:11:44.564 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5c8719f7-1028-4983-aa89-c99a459b6295 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.572 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0a903a49-565c-4d16-94b9-53aee7911fba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.574 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[eb28c916-fc91-44c4-bea8-db9fdea3bca3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.594 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[375a293c-8549-4873-9801-2d821b1db1f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664600, 'reachable_time': 29049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 444860, 'error': None, 'target': 'ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:11:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d14e89566\x2d5c79\x2d472a\x2d819f\x2d45cd3bbc2134.mount: Deactivated successfully.
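The mount unit name above is the systemd-escaped form of the netns bind-mount path /run/netns/ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134: '/' maps to '-' and each literal '-' is escaped as \x2d. A minimal sketch of that escaping rule (systemd_escape_path is a made-up helper covering only the common cases; the canonical behaviour is systemd-escape --path, per systemd.unit(5)):

    # Rough sketch of systemd path escaping: alphanumerics kept,
    # '/' -> '-', other bytes -> \xNN.
    def systemd_escape_path(path: str) -> str:
        out = []
        for ch in path.strip('/'):
            if ch == '/':
                out.append('-')
            elif ch.isalnum() or ch in '_.':
                out.append(ch)
            else:
                out.append('\\x%02x' % ord(ch))
        return ''.join(out)

    print(systemd_escape_path('/run/netns/ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134') + '.mount')
    # run-netns-ovnmeta\x2d14e89566\x2d5c79\x2d472a\x2d819f\x2d45cd3bbc2134.mount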
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.604 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e89566-5c79-472a-819f-45cd3bbc2134 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:11:44 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:44.604 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a936c3-3119-45dd-af0c-8d1eeae35dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
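The remove_netns call and the (4, None) reply above are the two halves of one oslo.privsep round trip: the helper executes in a root daemon, and the unprivileged agent receives only the serialized result. A rough sketch of how such an entrypoint is declared, assuming oslo.privsep and pyroute2 (the 'default' context and its capability list are illustrative; neutron's real definitions live in neutron.privileged):

    from oslo_privsep import capabilities, priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def remove_netns(name):
        # Runs inside the root privsep daemon; the caller sees only the
        # serialized "reply[...]: (4, None)" acknowledgement logged above.
        netns.remove(name)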
Nov 26 02:11:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.099 350391 INFO nova.virt.libvirt.driver [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Deleting instance files /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295_del#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.100 350391 INFO nova.virt.libvirt.driver [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Deletion of /var/lib/nova/instances/5c8719f7-1028-4983-aa89-c99a459b6295_del complete#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.159 350391 INFO nova.compute.manager [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.160 350391 DEBUG oslo.service.loopingcall [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.160 350391 DEBUG nova.compute.manager [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.161 350391 DEBUG nova.network.neutron [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.333 350391 DEBUG nova.compute.manager [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-unplugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.334 350391 DEBUG oslo_concurrency.lockutils [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.334 350391 DEBUG oslo_concurrency.lockutils [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.334 350391 DEBUG oslo_concurrency.lockutils [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.335 350391 DEBUG nova.compute.manager [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] No waiting events found dispatching network-vif-unplugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:45 compute-0 nova_compute[350387]: 2025-11-26 02:11:45.335 350391 DEBUG nova.compute.manager [req-743d0d4c-d280-44da-94a7-aec85536170b req-80f77f2c-875c-4f60-b777-665034ef7267 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-unplugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
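The acquire/release pair above, with its waited/held timings, is oslo.concurrency's standard instrumentation around a named lock. A minimal sketch of the same pattern (pop_instance_event here is a stand-in, not nova's actual method body):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('5c8719f7-1028-4983-aa89-c99a459b6295-events')
    def pop_instance_event(events, name):
        # lockutils emits the "Acquiring lock ... acquired ... released"
        # debug lines, with waited/held timings, around this body.
        return events.pop(name, None)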
Nov 26 02:11:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 448 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Nov 26 02:11:45 compute-0 ovn_controller[89102]: 2025-11-26T02:11:45Z|00104|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.046 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.520 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [{"id": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "address": "fa:16:3e:5a:7b:7e", "network": {"id": "14e89566-5c79-472a-819f-45cd3bbc2134", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1836704104-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339deb116b764070abc6d50520ee33c8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b2c5180-2f", "ovs_interfaceid": "4b2c5180-2ff0-4b98-90cb-e0e6ba068614", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
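Once the log prefix is stripped, the network_info payload above is plain JSON, so the addresses fall out of a short dict walk. A sketch, where network_info_json stands in for that payload:

    import json

    vif = json.loads(network_info_json)[0]
    fixed = vif['network']['subnets'][0]['ips'][0]
    print(vif['address'])                                 # fa:16:3e:5a:7b:7e
    print(fixed['address'])                               # 10.100.0.9
    print([f['address'] for f in fixed['floating_ips']])  # ['192.168.122.183']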
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.546 350391 DEBUG nova.network.neutron [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.550 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-5c8719f7-1028-4983-aa89-c99a459b6295" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.551 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.552 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.552 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.552 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
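The periodic-task lines above come from oslo.service's task runner cycling through registered tasks; _reclaim_queued_deletes returns early because the configured interval is non-positive. A schematic of how such a task is registered (SketchManager is illustrative, not nova's manager class):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class SketchManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard in the log line above: a non-positive
            # reclaim_instance_interval means "skipping...".
            return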
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.581 350391 INFO nova.compute.manager [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Took 1.42 seconds to deallocate network for instance.#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.627 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.628 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:46 compute-0 nova_compute[350387]: 2025-11-26 02:11:46.730 350391 DEBUG oslo_concurrency.processutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:11:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:11:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245752722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.266 350391 DEBUG oslo_concurrency.processutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
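The "Running cmd" / "returned: 0 in 0.536s" pair brackets a single oslo.concurrency subprocess call; the equivalent invocation, as a sketch:

    # processutils.execute() itself emits the "Running cmd" and
    # "returned: N in S" debug lines seen above, and raises
    # ProcessExecutionError on a non-zero exit code.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')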
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.278 350391 DEBUG nova.compute.provider_tree [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.300 350391 DEBUG nova.scheduler.client.report [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
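The inventory dict above fixes the capacity placement schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2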
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.329 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.366 350391 INFO nova.scheduler.client.report [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Deleted allocations for instance 5c8719f7-1028-4983-aa89-c99a459b6295#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.456 350391 DEBUG oslo_concurrency.lockutils [None req-f1055f77-d580-4a45-b056-3b6bda893bbf aadae2b9a9834185b051c2bc59c6054a 339deb116b764070abc6d50520ee33c8 - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 26 op/s
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.592 350391 DEBUG nova.compute.manager [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.592 350391 DEBUG oslo_concurrency.lockutils [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.593 350391 DEBUG oslo_concurrency.lockutils [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.594 350391 DEBUG oslo_concurrency.lockutils [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "5c8719f7-1028-4983-aa89-c99a459b6295-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.594 350391 DEBUG nova.compute.manager [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] No waiting events found dispatching network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.595 350391 WARNING nova.compute.manager [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received unexpected event network-vif-plugged-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 for instance with vm_state deleted and task_state None.#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.596 350391 DEBUG nova.compute.manager [req-279e8558-82bf-41a1-9b43-e530c85a22bc req-69e2c7d1-460d-4b5c-976e-0ec49683f698 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Received event network-vif-deleted-4b2c5180-2ff0-4b98-90cb-e0e6ba068614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:11:47 compute-0 nova_compute[350387]: 2025-11-26 02:11:47.882 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:48 compute-0 nova_compute[350387]: 2025-11-26 02:11:48.414 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:48 compute-0 nova_compute[350387]: 2025-11-26 02:11:48.548 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:48 compute-0 nova_compute[350387]: 2025-11-26 02:11:48.549 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:11:49 compute-0 nova_compute[350387]: 2025-11-26 02:11:49.225 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1023 B/s wr, 26 op/s
Nov 26 02:11:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003484770788565475 of space, bias 1.0, pg target 0.10454312365696425 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
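Every pool line above follows the same arithmetic: the raw PG target is the pool's share of raw capacity, times its bias, times the cluster-wide PG budget. Assuming the default mon_target_pg_per_osd of 100 and the 3 up OSDs this cluster's osdmap reports, the logged numbers reproduce exactly:

    # Hedged check of the pg_autoscaler output; OSDS and TARGET_PG_PER_OSD
    # are assumptions (3 up/in OSDs per the osdmap, default budget of 100).
    OSDS, TARGET_PG_PER_OSD = 3, 100

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(raw_pg_target(0.0003484770788565475, 1.0))  # 0.10454312365696425 ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')
    # The autoscaler then quantizes toward a power of two and only resizes a
    # pool when pg_num is off by a large factor, hence "quantized to 32 (current 32)".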
Nov 26 02:11:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:11:51 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:11:51.558 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:11:51 compute-0 nova_compute[350387]: 2025-11-26 02:11:51.821 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:52 compute-0 ovn_controller[89102]: 2025-11-26T02:11:52Z|00105|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:11:53 compute-0 nova_compute[350387]: 2025-11-26 02:11:53.019 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:53 compute-0 nova_compute[350387]: 2025-11-26 02:11:53.414 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:11:54 compute-0 nova_compute[350387]: 2025-11-26 02:11:54.018 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:54 compute-0 nova_compute[350387]: 2025-11-26 02:11:54.227 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:11:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:11:56 compute-0 podman[444886]: 2025-11-26 02:11:56.590519704 +0000 UTC m=+0.137812582 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 26 02:11:56 compute-0 podman[444887]: 2025-11-26 02:11:56.6003664 +0000 UTC m=+0.142800122 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 02:11:56 compute-0 podman[444888]: 2025-11-26 02:11:56.602032177 +0000 UTC m=+0.140315783 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:11:57 compute-0 nova_compute[350387]: 2025-11-26 02:11:57.197 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 15 op/s
Nov 26 02:11:58 compute-0 nova_compute[350387]: 2025-11-26 02:11:58.418 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:59 compute-0 nova_compute[350387]: 2025-11-26 02:11:59.199 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123104.1971064, 5c8719f7-1028-4983-aa89-c99a459b6295 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:11:59 compute-0 nova_compute[350387]: 2025-11-26 02:11:59.200 350391 INFO nova.compute.manager [-] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:11:59 compute-0 nova_compute[350387]: 2025-11-26 02:11:59.218 350391 DEBUG nova.compute.manager [None req-7314da98-7d84-4a6d-9db3-b1d2edfe0f74 - - - - - -] [instance: 5c8719f7-1028-4983-aa89-c99a459b6295] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:11:59 compute-0 nova_compute[350387]: 2025-11-26 02:11:59.230 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:11:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Nov 26 02:11:59 compute-0 podman[158021]: time="2025-11-26T02:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:11:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:11:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Nov 26 02:11:59 compute-0 nova_compute[350387]: 2025-11-26 02:11:59.960 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:01 compute-0 openstack_network_exporter[367323]: ERROR   02:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:12:01 compute-0 openstack_network_exporter[367323]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:12:01 compute-0 openstack_network_exporter[367323]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:12:01 compute-0 openstack_network_exporter[367323]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:12:01 compute-0 openstack_network_exporter[367323]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:12:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Nov 26 02:12:02 compute-0 nova_compute[350387]: 2025-11-26 02:12:02.493 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:02 compute-0 ovn_controller[89102]: 2025-11-26T02:12:02Z|00106|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:12:02 compute-0 nova_compute[350387]: 2025-11-26 02:12:02.909 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:03 compute-0 nova_compute[350387]: 2025-11-26 02:12:03.420 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 103 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 4.2 KiB/s wr, 5 op/s
Nov 26 02:12:04 compute-0 nova_compute[350387]: 2025-11-26 02:12:04.233 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:04 compute-0 podman[444944]: 2025-11-26 02:12:04.587088255 +0000 UTC m=+0.129609953 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:12:04 compute-0 podman[444945]: 2025-11-26 02:12:04.640499651 +0000 UTC m=+0.176236008 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:12:04 compute-0 nova_compute[350387]: 2025-11-26 02:12:04.778 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:05 compute-0 ovn_controller[89102]: 2025-11-26T02:12:05Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:2c:51 10.100.0.13
Nov 26 02:12:05 compute-0 ovn_controller[89102]: 2025-11-26T02:12:05Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:2c:51 10.100.0.13
Nov 26 02:12:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 115 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 697 KiB/s wr, 20 op/s
Nov 26 02:12:06 compute-0 podman[444990]: 2025-11-26 02:12:06.579696434 +0000 UTC m=+0.122583636 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, version=9.4)
Nov 26 02:12:06 compute-0 podman[445009]: 2025-11-26 02:12:06.773647008 +0000 UTC m=+0.125352933 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:12:07 compute-0 nova_compute[350387]: 2025-11-26 02:12:07.279 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 26 02:12:08 compute-0 nova_compute[350387]: 2025-11-26 02:12:08.423 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:09 compute-0 nova_compute[350387]: 2025-11-26 02:12:09.238 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 26 02:12:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:10 compute-0 nova_compute[350387]: 2025-11-26 02:12:10.504 350391 INFO nova.compute.manager [None req-18a66808-48f7-4a0c-8c08-07f907788f99 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Get console output#033[00m
Nov 26 02:12:10 compute-0 nova_compute[350387]: 2025-11-26 02:12:10.516 350391 INFO oslo.privsep.daemon [None req-18a66808-48f7-4a0c-8c08-07f907788f99 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpa9to488y/privsep.sock']#033[00m
Nov 26 02:12:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 26 02:12:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 26 02:12:10 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.058 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.059 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.083 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.201 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.201 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.212 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.213 350391 INFO nova.compute.claims [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.369 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.410 350391 INFO oslo.privsep.daemon [None req-18a66808-48f7-4a0c-8c08-07f907788f99 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.268 445032 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.275 445032 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.279 445032 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.279 445032 INFO oslo.privsep.daemon [-] privsep daemon running as pid 445032#033[00m
Nov 26 02:12:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 400 KiB/s rd, 2.6 MiB/s wr, 89 op/s
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.539 445032 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
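The swallowed error above is an ordinary Python TypeError, raised when the console pty read hands back None instead of bytes; a one-line reproducer:

    try:
        b'console output' + None
    except TypeError as exc:
        print(exc)  # can't concat NoneType to bytes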
Nov 26 02:12:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/813847913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.862 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.875 350391 DEBUG nova.compute.provider_tree [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.896 350391 DEBUG nova.scheduler.client.report [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
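The inventory dict above is what the placement service sizes this node from; for each resource class the allocatable capacity is (total - reserved) * allocation_ratio. A quick worked check with the logged numbers:

# Worked example using the inventory logged above:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {cap:g} allocatable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 -- so the m1.nano claim above
# (1 vCPU / 128 MB / 1 GB root) fits comfortably.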
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.922 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.924 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.987 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 02:12:11 compute-0 nova_compute[350387]: 2025-11-26 02:12:11.989 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.038 350391 INFO nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.056 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.148 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.150 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.150 350391 INFO nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Creating image(s)#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.199 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.254 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.309 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.324 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.384 350391 DEBUG nova.policy [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57143d8a520a40849581651b89c19756', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca49eb89e83e4ab8a7d9392b980106ac', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
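The policy denial above is expected for a token carrying only the member and reader roles; nova evaluates the rule through oslo.policy. A self-contained sketch of that check (the 'role:admin' default below is illustrative, not nova's shipped policy string):

from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
# Register an illustrative default so the rule name resolves; nova ships
# its own defaults for network:attach_external_network.
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'role:admin'))

creds = {'roles': ['member', 'reader'],
         'project_id': 'ca49eb89e83e4ab8a7d9392b980106ac'}
allowed = enforcer.authorize('network:attach_external_network',
                             {}, creds, do_raise=False)
print(allowed)  # False for a member/reader token, matching the log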
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.455 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
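The qemu-img probe above runs under oslo's prlimit wrapper (1 GiB address-space cap, 30 s CPU cap) so that a malformed or hostile image cannot wedge the compute service. A rough equivalent of the logged command:

import json
import subprocess

# Bounded image probe: same command line the log shows, with resource
# limits applied by oslo_concurrency.prlimit before qemu-img runs.
path = '/var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17'
out = subprocess.check_output(
    ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
     '--as=1073741824', '--cpu=30', '--',
     'env', 'LC_ALL=C', 'LANG=C',
     'qemu-img', 'info', path, '--force-share', '--output=json'])
info = json.loads(out)
print(info.get('format'), info.get('virtual-size'))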
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.456 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.456 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.457 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
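The acquire/release pair above serializes downloads of one base image across concurrent spawns. Roughly the same primitive, with the lock name taken from the log and a placeholder body:

from oslo_concurrency import lockutils

def fetch_base_image():
    # Lock name is the base-file hash from the log; only one greenthread
    # per compute fetches/converts this base image at a time, and other
    # spawns block here (hence the "waited 0.001s" in the log).
    with lockutils.lock('beedb32a5f0393b3b7ca21cf7409d6e587060a17'):
        pass  # placeholder for the actual fetch/convert step

fetch_base_image()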
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.500 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.507 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:12 compute-0 podman[445111]: 2025-11-26 02:12:12.567112553 +0000 UTC m=+0.114542481 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 26 02:12:12 compute-0 podman[445112]: 2025-11-26 02:12:12.572158104 +0000 UTC m=+0.129600312 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:12:12 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.851 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:12.998 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] resizing rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
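The two steps above, importing the cached base file into the vms pool and then growing it to the flavor's 1 GiB root disk, can be approximated with the rbd CLI (nova itself resizes through the librbd binding; 1073741824 bytes is 1024 MiB, the default unit of --size):

import subprocess

base = '/var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17'
image = '7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk'
auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

# Import the flat base file as a format-2 RBD image, as logged above.
subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                       '--image-format=2'] + auth)
# Grow it to the flavor's root_gb (1 GiB == 1024 MiB).
subprocess.check_call(['rbd', 'resize', f'vms/{image}',
                       '--size', '1024'] + auth)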
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.244 350391 DEBUG nova.objects.instance [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lazy-loading 'migration_context' on Instance uuid 7f2d249d-4d0b-4ee7-ac66-deb2637c906d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.264 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.265 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Ensure instance console log exists: /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.265 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.265 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.265 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.426 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 144 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 3.4 MiB/s wr, 88 op/s
Nov 26 02:12:13 compute-0 nova_compute[350387]: 2025-11-26 02:12:13.750 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Successfully created port: 719174c4-1a03-42f1-a0c2-6d96523c40e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
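While the disk is prepared, the port above is created against neutron in the background. A speculative equivalent using openstacksdk, with the cloud name and port name as assumptions and the network id copied from the log:

import openstack

# 'openstack' is an assumed clouds.yaml entry, not something in the log.
conn = openstack.connect(cloud='openstack')
port = conn.network.create_port(
    network_id='d76cf0d9-50e2-47d9-b2d5-30e62916ffe8',  # from the log
    name='tempest-port-sketch',        # illustrative name
    device_owner='compute:nova')
print(port.id, port.mac_address)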
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.095 350391 DEBUG nova.compute.manager [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.096 350391 DEBUG nova.compute.manager [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing instance network info cache due to event network-changed-422f5ef7-f048-4c83-a300-8b5942aafb8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.096 350391 DEBUG oslo_concurrency.lockutils [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.097 350391 DEBUG oslo_concurrency.lockutils [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.098 350391 DEBUG nova.network.neutron [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Refreshing network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:12:14 compute-0 nova_compute[350387]: 2025-11-26 02:12:14.241 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.298 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Successfully updated port: 719174c4-1a03-42f1-a0c2-6d96523c40e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.326 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.327 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquired lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.328 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.450 350391 DEBUG nova.compute.manager [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-changed-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.451 350391 DEBUG nova.compute.manager [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Refreshing instance network info cache due to event network-changed-719174c4-1a03-42f1-a0c2-6d96523c40e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.451 350391 DEBUG oslo_concurrency.lockutils [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:12:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 163 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 281 KiB/s rd, 4.3 MiB/s wr, 71 op/s
Nov 26 02:12:15 compute-0 nova_compute[350387]: 2025-11-26 02:12:15.542 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 02:12:16 compute-0 nova_compute[350387]: 2025-11-26 02:12:16.466 350391 DEBUG nova.network.neutron [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updated VIF entry in instance network info cache for port 422f5ef7-f048-4c83-a300-8b5942aafb8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:12:16 compute-0 nova_compute[350387]: 2025-11-26 02:12:16.467 350391 DEBUG nova.network.neutron [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:16 compute-0 nova_compute[350387]: 2025-11-26 02:12:16.501 350391 DEBUG oslo_concurrency.lockutils [req-5f26b533-29ea-4bac-9069-cd9c26b4cbdd req-c6bfd950-a53a-4533-92a3-cdea57815e79 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.447 350391 DEBUG nova.network.neutron [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updating instance_info_cache with network_info: [{"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.470 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Releasing lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.470 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Instance network_info: |[{"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.472 350391 DEBUG oslo_concurrency.lockutils [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.473 350391 DEBUG nova.network.neutron [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Refreshing network info cache for port 719174c4-1a03-42f1-a0c2-6d96523c40e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.479 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Start _get_guest_xml network_info=[{"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.492 350391 WARNING nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.506 350391 DEBUG nova.virt.libvirt.host [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.507 350391 DEBUG nova.virt.libvirt.host [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:12:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 MiB/s wr, 55 op/s
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.516 350391 DEBUG nova.virt.libvirt.host [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.517 350391 DEBUG nova.virt.libvirt.host [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
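The two probes above try cgroups v1 first and then fall back to v2, where the check effectively amounts to reading the root cgroup.controllers file. A minimal version of the v2 probe:

def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
    # On a unified (v2) hierarchy the root cgroup.controllers file lists
    # the controllers available for delegation, e.g. "cpuset cpu io ...".
    try:
        with open(f'{root}/cgroup.controllers') as f:
            return 'cpu' in f.read().split()
    except FileNotFoundError:
        return False  # not a v2 hierarchy

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log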
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.518 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.519 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.520 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.521 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.522 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.522 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.523 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.524 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.525 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.525 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.526 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.527 350391 DEBUG nova.virt.hardware [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
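With every constraint at 0:0:0, the search above collapses to the single 1:1:1 topology for one vCPU. The enumeration is essentially a walk over factorizations of the vCPU count within the logged 65536 limits; a sketch (nova's real ordering and preference logic is more involved):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Yield every (sockets, cores, threads) whose product is the vCPU
    # count and whose members respect the per-dimension limits.
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log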
Nov 26 02:12:17 compute-0 nova_compute[350387]: 2025-11-26 02:12:17.532 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515189883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.030 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.079 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.092 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.431 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024367778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.586 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.589 350391 DEBUG nova.virt.libvirt.vif [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-541013646',display_name='tempest-ServersTestJSON-server-541013646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-541013646',id=10,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGKVUegaH1Htvq2XC9FKjmU8zrop1laJ5QojS3YRHB+4UCGceER3ARUSxRIp7nELquc4lnEsRwKU0piTX6wsV8MrOOo8Im2xBOWcUMZqe5pKDFxY3rwsCq0XHHwDz9cBQ==',key_name='tempest-keypair-1225664049',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca49eb89e83e4ab8a7d9392b980106ac',ramdisk_id='',reservation_id='r-jvbje3ax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1689888068',owner_user_name='tempest-ServersTestJSON-1689888068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57143d8a520a40849581651b89c19756',uuid=7f2d249d-4d0b-4ee7-ac66-deb2637c906d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.590 350391 DEBUG nova.network.os_vif_util [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converting VIF {"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.592 350391 DEBUG nova.network.os_vif_util [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
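The conversion above turns nova's network_info dict into a typed os-vif object that the OVS plugin can plug. A hedged sketch building the same VIFOpenVSwitch by hand, with field values copied from the log and coverage trimmed to what the log shows:

from os_vif.objects import vif as vif_obj

# Field names here match the Converted object printed in the log above.
vif = vif_obj.VIFOpenVSwitch(
    id='719174c4-1a03-42f1-a0c2-6d96523c40e9',
    address='fa:16:3e:a8:fb:ef',
    bridge_name='br-int',
    vif_name='tap719174c4-1a',
    has_traffic_filtering=True,
    port_profile=vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='719174c4-1a03-42f1-a0c2-6d96523c40e9'),
)
print(vif.vif_name, vif.bridge_name)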
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.594 350391 DEBUG nova.objects.instance [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f2d249d-4d0b-4ee7-ac66-deb2637c906d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.615 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <uuid>7f2d249d-4d0b-4ee7-ac66-deb2637c906d</uuid>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <name>instance-0000000a</name>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:name>tempest-ServersTestJSON-server-541013646</nova:name>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:12:17</nova:creationTime>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:user uuid="57143d8a520a40849581651b89c19756">tempest-ServersTestJSON-1689888068-project-member</nova:user>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:project uuid="ca49eb89e83e4ab8a7d9392b980106ac">tempest-ServersTestJSON-1689888068</nova:project>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <nova:port uuid="719174c4-1a03-42f1-a0c2-6d96523c40e9">
Nov 26 02:12:18 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <system>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="serial">7f2d249d-4d0b-4ee7-ac66-deb2637c906d</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="uuid">7f2d249d-4d0b-4ee7-ac66-deb2637c906d</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </system>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <os>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </os>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <features>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </features>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk">
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config">
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:18 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:a8:fb:ef"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <target dev="tap719174c4-1a"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/console.log" append="off"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <video>
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </video>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:12:18 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:12:18 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:12:18 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:12:18 compute-0 nova_compute[350387]: </domain>
Nov 26 02:12:18 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.618 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Preparing to wait for external event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.619 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.620 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.621 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.623 350391 DEBUG nova.virt.libvirt.vif [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-541013646',display_name='tempest-ServersTestJSON-server-541013646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-541013646',id=10,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGKVUegaH1Htvq2XC9FKjmU8zrop1laJ5QojS3YRHB+4UCGceER3ARUSxRIp7nELquc4lnEsRwKU0piTX6wsV8MrOOo8Im2xBOWcUMZqe5pKDFxY3rwsCq0XHHwDz9cBQ==',key_name='tempest-keypair-1225664049',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca49eb89e83e4ab8a7d9392b980106ac',ramdisk_id='',reservation_id='r-jvbje3ax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1689888068',owner_user_name='tempest-ServersTestJSON-1689888068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57143d8a520a40849581651b89c19756',uuid=7f2d249d-4d0b-4ee7-ac66-deb2637c906d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.625 350391 DEBUG nova.network.os_vif_util [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converting VIF {"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.626 350391 DEBUG nova.network.os_vif_util [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.628 350391 DEBUG os_vif [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.629 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.631 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.632 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.638 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.639 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap719174c4-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.639 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap719174c4-1a, col_values=(('external_ids', {'iface-id': '719174c4-1a03-42f1-a0c2-6d96523c40e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:fb:ef', 'vm-uuid': '7f2d249d-4d0b-4ee7-ac66-deb2637c906d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.642 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:18 compute-0 NetworkManager[48886]: <info>  [1764123138.6442] manager: (tap719174c4-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.646 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.656 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.657 350391 INFO os_vif [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a')#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.741 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.741 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.742 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] No VIF found with MAC fa:16:3e:a8:fb:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.743 350391 INFO nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Using config drive#033[00m
Nov 26 02:12:18 compute-0 nova_compute[350387]: 2025-11-26 02:12:18.805 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 MiB/s wr, 55 op/s
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.732 350391 INFO nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Creating config drive at /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.746 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7l0am8j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.844 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.844 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.862 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.899 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7l0am8j" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.959 350391 DEBUG nova.storage.rbd_utils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] rbd image 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:19 compute-0 nova_compute[350387]: 2025-11-26 02:12:19.970 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.041 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.041 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.050 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.050 350391 INFO nova.compute.claims [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:12:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.106 350391 DEBUG nova.network.neutron [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updated VIF entry in instance network info cache for port 719174c4-1a03-42f1-a0c2-6d96523c40e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.107 350391 DEBUG nova.network.neutron [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updating instance_info_cache with network_info: [{"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.123 350391 DEBUG oslo_concurrency.lockutils [req-bbaf7eba-c69d-4125-a916-49fb5342d988 req-056d901f-7cae-4ec8-b1cb-2d92a7389609 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.201 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.265 350391 DEBUG oslo_concurrency.processutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config 7f2d249d-4d0b-4ee7-ac66-deb2637c906d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.267 350391 INFO nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Deleting local config drive /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d/disk.config because it was imported into RBD.#033[00m
Nov 26 02:12:20 compute-0 kernel: tap719174c4-1a: entered promiscuous mode
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.3699] manager: (tap719174c4-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 26 02:12:20 compute-0 ovn_controller[89102]: 2025-11-26T02:12:20Z|00107|binding|INFO|Claiming lport 719174c4-1a03-42f1-a0c2-6d96523c40e9 for this chassis.
Nov 26 02:12:20 compute-0 ovn_controller[89102]: 2025-11-26T02:12:20Z|00108|binding|INFO|719174c4-1a03-42f1-a0c2-6d96523c40e9: Claiming fa:16:3e:a8:fb:ef 10.100.0.11
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.374 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.382 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:fb:ef 10.100.0.11'], port_security=['fa:16:3e:a8:fb:ef 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f2d249d-4d0b-4ee7-ac66-deb2637c906d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca49eb89e83e4ab8a7d9392b980106ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '643cfa5b-0d1f-4720-9e41-527415b2cf4b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf94a939-941d-4bff-aba1-203a8ce4724e, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=719174c4-1a03-42f1-a0c2-6d96523c40e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.383 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 719174c4-1a03-42f1-a0c2-6d96523c40e9 in datapath d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 bound to our chassis#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.384 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d76cf0d9-50e2-47d9-b2d5-30e62916ffe8#033[00m
Nov 26 02:12:20 compute-0 ovn_controller[89102]: 2025-11-26T02:12:20Z|00109|binding|INFO|Setting lport 719174c4-1a03-42f1-a0c2-6d96523c40e9 ovn-installed in OVS
Nov 26 02:12:20 compute-0 ovn_controller[89102]: 2025-11-26T02:12:20Z|00110|binding|INFO|Setting lport 719174c4-1a03-42f1-a0c2-6d96523c40e9 up in Southbound
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.398 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[12b23e4f-df20-43a0-a0c0-f446d0ee453a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.399 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd76cf0d9-51 in ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.401 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.402 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd76cf0d9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.402 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[639e7219-9339-4118-9160-c78420466a3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.405 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d9241b42-6e55-4693-a109-e9a79515fdcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.428 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[352fe1f4-e136-4050-a576-10f51b6a46a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 systemd-udevd[445423]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:12:20 compute-0 systemd-machined[138512]: New machine qemu-10-instance-0000000a.
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.452 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[83e7964d-824b-41b9-8638-5d92a7808a3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.4575] device (tap719174c4-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.4585] device (tap719174c4-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:12:20 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.485 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[3c564160-6f3c-492a-81cd-0e2717b8c3c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.491 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[932a2187-51c1-4613-874f-f3b12e89e85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.4955] manager: (tapd76cf0d9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.535 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1a58d4-c7ae-43ed-b877-3e0ed707f695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.538 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[75e91666-d4e1-4c08-a5bf-c65b90ce10c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.5651] device (tapd76cf0d9-50): carrier: link connected
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.572 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[58aebb8c-b59f-4bcf-81b0-690b6fa4fdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.596 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[3963b4dc-a38d-4753-b8a6-96a2cac477cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd76cf0d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677119, 'reachable_time': 43281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445453, 'error': None, 'target': 'ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.615 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f0dad113-97de-4a78-bed7-4726ced8a4e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:3618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677119, 'tstamp': 677119}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445454, 'error': None, 'target': 'ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.634 350391 DEBUG nova.compute.manager [req-9fdf8fd8-48f7-46b4-a971-d0570657f21d req-2c82cfec-9b49-4f08-b981-0c2ab7d308bb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.634 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[86972a29-249c-417b-911d-82297eb76b71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd76cf0d9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677119, 'reachable_time': 43281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445455, 'error': None, 'target': 'ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.635 350391 DEBUG oslo_concurrency.lockutils [req-9fdf8fd8-48f7-46b4-a971-d0570657f21d req-2c82cfec-9b49-4f08-b981-0c2ab7d308bb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.636 350391 DEBUG oslo_concurrency.lockutils [req-9fdf8fd8-48f7-46b4-a971-d0570657f21d req-2c82cfec-9b49-4f08-b981-0c2ab7d308bb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.637 350391 DEBUG oslo_concurrency.lockutils [req-9fdf8fd8-48f7-46b4-a971-d0570657f21d req-2c82cfec-9b49-4f08-b981-0c2ab7d308bb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.637 350391 DEBUG nova.compute.manager [req-9fdf8fd8-48f7-46b4-a971-d0570657f21d req-2c82cfec-9b49-4f08-b981-0c2ab7d308bb 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Processing event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.693 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4b6615-af7e-4070-9b3c-175955719ec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172794214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.761 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.777 350391 DEBUG nova.compute.provider_tree [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.796 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c46e638e-fbe4-4411-a0e0-f4b8183e51a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.798 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd76cf0d9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.799 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.800 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd76cf0d9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.800 350391 DEBUG nova.scheduler.client.report [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:12:20 compute-0 NetworkManager[48886]: <info>  [1764123140.8039] manager: (tapd76cf0d9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 26 02:12:20 compute-0 kernel: tapd76cf0d9-50: entered promiscuous mode
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.807 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.811 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd76cf0d9-50, col_values=(('external_ids', {'iface-id': '91cba734-d478-4044-a91f-03f4387d1c38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:20 compute-0 ovn_controller[89102]: 2025-11-26T02:12:20Z|00111|binding|INFO|Releasing lport 91cba734-d478-4044-a91f-03f4387d1c38 from this chassis (sb_readonly=0)
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.832 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.833 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.835 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e3acee-a87e-4680-8cc1-babf62a66c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.838 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.pid.haproxy
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.838 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID d76cf0d9-50e2-47d9-b2d5-30e62916ffe8
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.838 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:12:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:20.839 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'env', 'PROCESS_TAG=haproxy-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
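The agent has just rendered the haproxy_cfg dumped above to /var/lib/neutron/ovn-metadata-proxy/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.conf and now starts haproxy inside the ovnmeta- namespace. A minimal stand-alone sketch of that launch step, assuming direct root privileges instead of the neutron-rootwrap and PROCESS_TAG plumbing in the logged command:

    import subprocess

    netns = "ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8"
    cfg = "/var/lib/neutron/ovn-metadata-proxy/d76cf0d9-50e2-47d9-b2d5-30e62916ffe8.conf"
    # haproxy backgrounds itself: the rendered config above contains 'daemon'.
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg], check=True)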
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.891 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.891 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.920 350391 INFO nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:12:20 compute-0 nova_compute[350387]: 2025-11-26 02:12:20.943 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.027 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.029 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.030 350391 INFO nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Creating image(s)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.103 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.183 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.228 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.237 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "75aa7190add890d937d223054d1bca64341e098f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.238 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "75aa7190add890d937d223054d1bca64341e098f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.243 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123141.1117332, 7f2d249d-4d0b-4ee7-ac66-deb2637c906d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.243 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] VM Started (Lifecycle Event)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.245 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.256 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.264 350391 INFO nova.virt.libvirt.driver [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Instance spawned successfully.
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.265 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.267 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.274 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.285 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.286 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.286 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.286 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.286 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.287 350391 DEBUG nova.virt.libvirt.driver [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.290 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.290 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123141.1118982, 7f2d249d-4d0b-4ee7-ac66-deb2637c906d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.290 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] VM Paused (Lifecycle Event)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.329 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.335 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123141.250901, 7f2d249d-4d0b-4ee7-ac66-deb2637c906d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.335 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] VM Resumed (Lifecycle Event)
Nov 26 02:12:21 compute-0 podman[445584]: 2025-11-26 02:12:21.360973542 +0000 UTC m=+0.102985937 container create 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.373 350391 DEBUG nova.policy [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9710ede02d47cbb016ff596d936633', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.383 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.393 350391 INFO nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Took 9.24 seconds to spawn the instance on the hypervisor.
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.393 350391 DEBUG nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.400 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:12:21 compute-0 podman[445584]: 2025-11-26 02:12:21.318906533 +0000 UTC m=+0.060919008 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:12:21 compute-0 systemd[1]: Started libpod-conmon-6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba.scope.
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.441 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:12:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b89ec0d83882f5ec3c09ee82e381215bb8e5cec118d21fbb2a1608b6d91e154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.497 350391 INFO nova.compute.manager [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Took 10.34 seconds to build instance.
Nov 26 02:12:21 compute-0 podman[445584]: 2025-11-26 02:12:21.499594156 +0000 UTC m=+0.241606571 container init 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.507 350391 DEBUG nova.virt.libvirt.imagebackend [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Image locations are: [{'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/dbaf181e-c7da-4938-bfef-7ab3aa9a19bc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://36901f64-240e-5c29-a2e2-29b56f2c329c/images/dbaf181e-c7da-4938-bfef-7ab3aa9a19bc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 26 02:12:21 compute-0 podman[445584]: 2025-11-26 02:12:21.5093862 +0000 UTC m=+0.251398605 container start 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 02:12:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.9 MiB/s wr, 51 op/s
Nov 26 02:12:21 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [NOTICE]   (445602) : New worker (445604) forked
Nov 26 02:12:21 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [NOTICE]   (445602) : Loading success.
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.536 350391 DEBUG oslo_concurrency.lockutils [None req-a2dd538d-eb9a-4cc9-94d6-84f20e5a4779 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.667 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.668 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.684 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.758 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.759 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.776 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.778 350391 INFO nova.compute.claims [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Claim successful on node compute-0.ctlplane.example.com
Nov 26 02:12:21 compute-0 nova_compute[350387]: 2025-11-26 02:12:21.974 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.449 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Successfully created port: 0659d4f2-a740-4ecb-92df-7e2267226c3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 02:12:22 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:22 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632086853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.506 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
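The ceph df call above (dispatched by the monitor two lines earlier) is how the resource tracker sizes the RBD-backed DISK_GB inventory during the claim. A minimal sketch of consuming the same output; the stats/total_bytes/total_avail_bytes key names are the usual ceph df JSON layout, assumed here rather than shown in this log:

    import json
    import subprocess

    # Same command the resource tracker ran above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])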
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.521 350391 DEBUG nova.compute.provider_tree [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.539 350391 DEBUG nova.scheduler.client.report [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.561 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.562 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.614 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.614 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.635 350391 INFO nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.654 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.713 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.789 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.791 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.791 350391 INFO nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Creating image(s)
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.822 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.864 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.904 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.914 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.936 350391 DEBUG nova.policy [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a7102c5716b644e9a49ae0b2b6d2bd04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.939 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.part --force-share --output=json" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.940 350391 DEBUG nova.virt.images [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] dbaf181e-c7da-4938-bfef-7ab3aa9a19bc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.941 350391 DEBUG nova.privsep.utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.942 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.part /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.979 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.980 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.981 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:22 compute-0 nova_compute[350387]: 2025-11-26 02:12:22.982 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.024 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.038 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 e897c19f-7590-405d-9e92-ff9e0fd9b366_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.079 350391 DEBUG nova.compute.manager [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.080 350391 DEBUG oslo_concurrency.lockutils [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.081 350391 DEBUG oslo_concurrency.lockutils [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.081 350391 DEBUG oslo_concurrency.lockutils [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.082 350391 DEBUG nova.compute.manager [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] No waiting events found dispatching network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.083 350391 WARNING nova.compute.manager [req-4f8814b2-001b-4f73-b3bf-2bb1337966c3 req-7b1d1698-1c3e-4986-98c7-5e2df05f5ff3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received unexpected event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 for instance with vm_state active and task_state None.
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.209 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.part /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.converted" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.214 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.252 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Successfully updated port: 0659d4f2-a740-4ecb-92df-7e2267226c3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.268 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.269 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.269 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.311 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f.converted --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.315 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "75aa7190add890d937d223054d1bca64341e098f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
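The lines from 02:12:21.237 to 02:12:23.315 above are one pass through nova's image cache under the lock named after the image hash: qemu-img info on the downloaded .part file, a qcow2-to-raw conversion, and a second qemu-img info to verify the result. A condensed sketch of the same steps, using the commands logged above but without nova's prlimit wrapper or cache bookkeeping:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f"

    def img_info(path):
        # Same qemu-img invocation as the logged calls.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    if img_info(base + ".part")["format"] == "qcow2":
        # '-t none' bypasses the host page cache, as in the logged command.
        subprocess.run(["qemu-img", "convert", "-t", "none", "-O", "raw",
                        "-f", "qcow2", base + ".part", base + ".converted"],
                       check=True)
        assert img_info(base + ".converted")["format"] == "raw"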
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.370 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.380 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f 74d081af-66cd-4e37-99e4-31f777885766_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.425 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 e897c19f-7590-405d-9e92-ff9e0fd9b366_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.487 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.493 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:12:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 203 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 710 KiB/s rd, 3.5 MiB/s wr, 37 op/s
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.578 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Successfully created port: 03ba18c7-398e-48f9-9269-730aa0ea6368 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.594 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] resizing rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.668 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.779 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f 74d081af-66cd-4e37-99e4-31f777885766_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.833 350391 DEBUG nova.objects.instance [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'migration_context' on Instance uuid e897c19f-7590-405d-9e92-ff9e0fd9b366 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.889 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.889 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Ensure instance console log exists: /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.890 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.891 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.891 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:23 compute-0 nova_compute[350387]: 2025-11-26 02:12:23.901 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] resizing rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
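Each root disk is then created by importing the raw base file into the vms pool and resizing it to the flavor's root size, 1073741824 bytes (1 GiB) for both instances here. The import below is the exact command logged above; the resize line is only a CLI stand-in for nova's rbd_utils.resize, which goes through the rbd Python binding (and it assumes rbd's default --size unit of MiB):

    import subprocess

    base = "/var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f"
    disk = "74d081af-66cd-4e37-99e4-31f777885766_disk"
    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Exact import command from the log above.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *ceph_args], check=True)
    # CLI equivalent of the logged resize to 1073741824 bytes (1024 MiB).
    subprocess.run(["rbd", "resize", "--pool", "vms", disk,
                    "--size", "1024", *ceph_args], check=True)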
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.081 350391 DEBUG nova.objects.instance [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'migration_context' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.095 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.095 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Ensure instance console log exists: /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.095 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.096 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.096 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.197 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Successfully updated port: 03ba18c7-398e-48f9-9269-730aa0ea6368 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.215 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.215 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquired lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.215 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.333 350391 DEBUG nova.compute.manager [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-changed-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.333 350391 DEBUG nova.compute.manager [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Refreshing instance network info cache due to event network-changed-03ba18c7-398e-48f9-9269-730aa0ea6368. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.334 350391 DEBUG oslo_concurrency.lockutils [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.417 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.756 350391 DEBUG nova.network.neutron [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.781 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.782 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Instance network_info: |[{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.788 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Start _get_guest_xml network_info=[{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:12:09Z,direct_url=<?>,disk_format='qcow2',id=dbaf181e-c7da-4938-bfef-7ab3aa9a19bc,min_disk=0,min_ram=0,name='tempest-scenario-img--177366414',owner='cb4e9e1ffe494961ba45f8f24f21b106',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:12:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.799 350391 WARNING nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.805 350391 DEBUG nova.virt.libvirt.host [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.806 350391 DEBUG nova.virt.libvirt.host [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.811 350391 DEBUG nova.virt.libvirt.host [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.812 350391 DEBUG nova.virt.libvirt.host [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.813 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.813 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:12:09Z,direct_url=<?>,disk_format='qcow2',id=dbaf181e-c7da-4938-bfef-7ab3aa9a19bc,min_disk=0,min_ram=0,name='tempest-scenario-img--177366414',owner='cb4e9e1ffe494961ba45f8f24f21b106',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:12:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.814 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.814 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.814 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.815 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.815 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.815 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.815 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.816 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.816 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.816 350391 DEBUG nova.virt.hardware [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 02:12:24 compute-0 nova_compute[350387]: 2025-11-26 02:12:24.820 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:24.996 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:24.997 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:24.999 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.087 350391 DEBUG nova.network.neutron [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updating instance_info_cache with network_info: [{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.111 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Releasing lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.112 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Instance network_info: |[{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.113 350391 DEBUG oslo_concurrency.lockutils [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.113 350391 DEBUG nova.network.neutron [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Refreshing network info cache for port 03ba18c7-398e-48f9-9269-730aa0ea6368 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.119 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Start _get_guest_xml network_info=[{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.131 350391 WARNING nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.138 350391 DEBUG nova.virt.libvirt.host [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.139 350391 DEBUG nova.virt.libvirt.host [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.153 350391 DEBUG nova.virt.libvirt.host [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.153 350391 DEBUG nova.virt.libvirt.host [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.154 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.154 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.155 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.155 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.155 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.155 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.156 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.156 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.156 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.157 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.157 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.157 350391 DEBUG nova.virt.hardware [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.161 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.213 350391 DEBUG nova.compute.manager [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-changed-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.214 350391 DEBUG nova.compute.manager [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Refreshing instance network info cache due to event network-changed-0659d4f2-a740-4ecb-92df-7e2267226c3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.214 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.214 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.214 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Refreshing network info cache for port 0659d4f2-a740-4ecb-92df-7e2267226c3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:12:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125724846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.331 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.394 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.422 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.498 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.499 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.500 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.501 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.501 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.503 350391 INFO nova.compute.manager [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Terminating instance#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.504 350391 DEBUG nova.compute.manager [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:12:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 234 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 75 op/s
Nov 26 02:12:25 compute-0 kernel: tap719174c4-1a (unregistering): left promiscuous mode
Nov 26 02:12:25 compute-0 NetworkManager[48886]: <info>  [1764123145.5709] device (tap719174c4-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.580 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:25 compute-0 ovn_controller[89102]: 2025-11-26T02:12:25Z|00112|binding|INFO|Releasing lport 719174c4-1a03-42f1-a0c2-6d96523c40e9 from this chassis (sb_readonly=0)
Nov 26 02:12:25 compute-0 ovn_controller[89102]: 2025-11-26T02:12:25Z|00113|binding|INFO|Setting lport 719174c4-1a03-42f1-a0c2-6d96523c40e9 down in Southbound
Nov 26 02:12:25 compute-0 ovn_controller[89102]: 2025-11-26T02:12:25Z|00114|binding|INFO|Removing iface tap719174c4-1a ovn-installed in OVS
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.594 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:fb:ef 10.100.0.11'], port_security=['fa:16:3e:a8:fb:ef 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f2d249d-4d0b-4ee7-ac66-deb2637c906d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca49eb89e83e4ab8a7d9392b980106ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '643cfa5b-0d1f-4720-9e41-527415b2cf4b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf94a939-941d-4bff-aba1-203a8ce4724e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=719174c4-1a03-42f1-a0c2-6d96523c40e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.597 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.600 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 719174c4-1a03-42f1-a0c2-6d96523c40e9 in datapath d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 unbound from our chassis#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.605 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.606 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdd70e7-da34-4ffc-b586-039856fffffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.608 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 namespace which is not needed anymore#033[00m
Nov 26 02:12:25 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 26 02:12:25 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 5.262s CPU time.
Nov 26 02:12:25 compute-0 systemd-machined[138512]: Machine qemu-10-instance-0000000a terminated.
Nov 26 02:12:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139750125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.722 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.770 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.785 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:25 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [NOTICE]   (445602) : haproxy version is 2.8.14-c23fe91
Nov 26 02:12:25 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [NOTICE]   (445602) : path to executable is /usr/sbin/haproxy
Nov 26 02:12:25 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [WARNING]  (445602) : Exiting Master process...
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.815 350391 INFO nova.virt.libvirt.driver [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Instance destroyed successfully.#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.815 350391 DEBUG nova.objects.instance [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lazy-loading 'resources' on Instance uuid 7f2d249d-4d0b-4ee7-ac66-deb2637c906d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:12:25 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [ALERT]    (445602) : Current worker (445604) exited with code 143 (Terminated)
Nov 26 02:12:25 compute-0 neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8[445598]: [WARNING]  (445602) : All workers exited. Exiting... (0)
Nov 26 02:12:25 compute-0 systemd[1]: libpod-6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba.scope: Deactivated successfully.
Nov 26 02:12:25 compute-0 podman[446027]: 2025-11-26 02:12:25.826642752 +0000 UTC m=+0.094359575 container died 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.831 350391 DEBUG nova.virt.libvirt.vif [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-541013646',display_name='tempest-ServersTestJSON-server-541013646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-541013646',id=10,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGKVUegaH1Htvq2XC9FKjmU8zrop1laJ5QojS3YRHB+4UCGceER3ARUSxRIp7nELquc4lnEsRwKU0piTX6wsV8MrOOo8Im2xBOWcUMZqe5pKDFxY3rwsCq0XHHwDz9cBQ==',key_name='tempest-keypair-1225664049',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:12:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca49eb89e83e4ab8a7d9392b980106ac',ramdisk_id='',reservation_id='r-jvbje3ax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1689888068',owner_user_name='tempest-ServersTestJSON-1689888068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:12:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57143d8a520a40849581651b89c19756',uuid=7f2d249d-4d0b-4ee7-ac66-deb2637c906d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.831 350391 DEBUG nova.network.os_vif_util [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converting VIF {"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.832 350391 DEBUG nova.network.os_vif_util [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.832 350391 DEBUG os_vif [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.834 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.835 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap719174c4-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.838 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.840 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:12:25 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.841 350391 INFO os_vif [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:fb:ef,bridge_name='br-int',has_traffic_filtering=True,id=719174c4-1a03-42f1-a0c2-6d96523c40e9,network=Network(d76cf0d9-50e2-47d9-b2d5-30e62916ffe8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap719174c4-1a')#033[00m
Nov 26 02:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b89ec0d83882f5ec3c09ee82e381215bb8e5cec118d21fbb2a1608b6d91e154-merged.mount: Deactivated successfully.
Nov 26 02:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba-userdata-shm.mount: Deactivated successfully.
Nov 26 02:12:25 compute-0 podman[446027]: 2025-11-26 02:12:25.892788635 +0000 UTC m=+0.160505438 container cleanup 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:12:25 compute-0 systemd[1]: libpod-conmon-6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba.scope: Deactivated successfully.
Nov 26 02:12:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:25 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649873258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:25 compute-0 podman[446111]: 2025-11-26 02:12:25.983706873 +0000 UTC m=+0.053952613 container remove 6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.993 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5311f120-3440-47eb-bc4e-fcd37dfc43e4]: (4, ('Wed Nov 26 02:12:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 (6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba)\n6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba\nWed Nov 26 02:12:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 (6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba)\n6c96b803404e29982e313ac8921af978475d2c5f02c6a56fdbd868d8c6e368ba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.995 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[cd78d252-a72b-4cb8-ad8f-04a3b8fddfe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:25.997 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd76cf0d9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:25.999 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 kernel: tapd76cf0d9-50: left promiscuous mode
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.001 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.008 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0f19764f-c432-48e0-b50f-feb6aea28729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.015 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.021 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.022 350391 DEBUG nova.virt.libvirt.vif [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',id=11,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-sdlvzrp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:20Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=74d081af-66cd-4e37-99e4-31f777885766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.022 350391 DEBUG nova.network.os_vif_util [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.023 350391 DEBUG nova.network.os_vif_util [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.024 350391 DEBUG nova.objects.instance [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.025 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c23e78e7-3ae6-48a4-905e-f161a8c7782e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.026 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1fecb158-0bfe-4ae8-b55c-6edb4f2ba632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.035 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <uuid>74d081af-66cd-4e37-99e4-31f777885766</uuid>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <name>instance-0000000b</name>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:name>te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n</nova:name>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:12:24</nova:creationTime>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:user uuid="3a9710ede02d47cbb016ff596d936633">tempest-PrometheusGabbiTest-624283200-project-member</nova:user>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:project uuid="cb4e9e1ffe494961ba45f8f24f21b106">tempest-PrometheusGabbiTest-624283200</nova:project>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="dbaf181e-c7da-4938-bfef-7ab3aa9a19bc"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:port uuid="0659d4f2-a740-4ecb-92df-7e2267226c3e">
Nov 26 02:12:26 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.2.57" ipVersion="4"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <system>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="serial">74d081af-66cd-4e37-99e4-31f777885766</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="uuid">74d081af-66cd-4e37-99e4-31f777885766</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </system>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <os>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </os>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <features>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </features>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/74d081af-66cd-4e37-99e4-31f777885766_disk">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/74d081af-66cd-4e37-99e4-31f777885766_disk.config">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:91:80:c9"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="tap0659d4f2-a7"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/console.log" append="off"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <video>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </video>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:12:26 compute-0 nova_compute[350387]: </domain>
Nov 26 02:12:26 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.035 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Preparing to wait for external event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.036 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.036 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.036 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:26 compute-0 NetworkManager[48886]: <info>  [1764123146.0435] manager: (tap0659d4f2-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.036 350391 DEBUG nova.virt.libvirt.vif [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',id=11,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-sdlvzrp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:20Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=74d081af-66cd-4e37-99e4-31f777885766,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.037 350391 DEBUG nova.network.os_vif_util [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.037 350391 DEBUG nova.network.os_vif_util [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.037 350391 DEBUG os_vif [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.038 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.038 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.038 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.041 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.041 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0659d4f2-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.041 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0659d4f2-a7, col_values=(('external_ids', {'iface-id': '0659d4f2-a740-4ecb-92df-7e2267226c3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:80:c9', 'vm-uuid': '74d081af-66cd-4e37-99e4-31f777885766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.042 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.046 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.046 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[9acd1962-eeee-491f-95df-f7b0c6722f41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677110, 'reachable_time': 16366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 446134, 'error': None, 'target': 'ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.048 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.048 350391 INFO os_vif [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7')#033[00m
Nov 26 02:12:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dd76cf0d9\x2d50e2\x2d47d9\x2db2d5\x2d30e62916ffe8.mount: Deactivated successfully.
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.050 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d76cf0d9-50e2-47d9-b2d5-30e62916ffe8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:12:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:26.051 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ce2c43-e908-45b2-bf46-4f025f72d0d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.102 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.102 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.103 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No VIF found with MAC fa:16:3e:91:80:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.103 350391 INFO nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Using config drive#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.152 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:12:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884877606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.273 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.274 350391 DEBUG nova.virt.libvirt.vif [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1078684613',display_name='tempest-TestNetworkBasicOps-server-1078684613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1078684613',id=12,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyDWvQdidKliIH+HIM7JVqsdDWQeY4BVkCwHvJcJLGUWAll4CaOk+2wkf46FTVDdHANhS0iRBWBKyNFCHlN5GDxGFhUaMWUW4q21XCkvMkhXsFc+huMMpeYvIKQhZN2Gg==',key_name='tempest-TestNetworkBasicOps-281693536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-0ccuj94c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:22Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=e897c19f-7590-405d-9e92-ff9e0fd9b366,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.274 350391 DEBUG nova.network.os_vif_util [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.275 350391 DEBUG nova.network.os_vif_util [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.276 350391 DEBUG nova.objects.instance [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e897c19f-7590-405d-9e92-ff9e0fd9b366 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.289 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <uuid>e897c19f-7590-405d-9e92-ff9e0fd9b366</uuid>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <name>instance-0000000c</name>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:name>tempest-TestNetworkBasicOps-server-1078684613</nova:name>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:12:25</nova:creationTime>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:user uuid="a7102c5716b644e9a49ae0b2b6d2bd04">tempest-TestNetworkBasicOps-345735252-project-member</nova:user>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:project uuid="66fdcaf8e71a4c809ab9cab4c64ca9d5">tempest-TestNetworkBasicOps-345735252</nova:project>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <nova:port uuid="03ba18c7-398e-48f9-9269-730aa0ea6368">
Nov 26 02:12:26 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <system>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="serial">e897c19f-7590-405d-9e92-ff9e0fd9b366</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="uuid">e897c19f-7590-405d-9e92-ff9e0fd9b366</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </system>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <os>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </os>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <features>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </features>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/e897c19f-7590-405d-9e92-ff9e0fd9b366_disk">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </source>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:12:26 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:49:31:0c"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <target dev="tap03ba18c7-39"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/console.log" append="off"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <video>
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </video>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:12:26 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:12:26 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:12:26 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:12:26 compute-0 nova_compute[350387]: </domain>
Nov 26 02:12:26 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
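The domain XML logged above is what nova's libvirt driver builds in _get_guest_xml before defining the guest. A minimal sketch for pulling the same document back out of libvirt with the libvirt-python bindings, assuming those bindings are installed, qemu:///system is reachable locally, and the instance UUID from this log is still defined:

    # Sketch: fetch the live domain XML for the instance spawned above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("e897c19f-7590-405d-9e92-ff9e0fd9b366")
    print(dom.XMLDesc(0))  # the same document nova logs at driver.py:7555
    conn.close()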
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.291 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Preparing to wait for external event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.291 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.292 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.292 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.293 350391 DEBUG nova.virt.libvirt.vif [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1078684613',display_name='tempest-TestNetworkBasicOps-server-1078684613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1078684613',id=12,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyDWvQdidKliIH+HIM7JVqsdDWQeY4BVkCwHvJcJLGUWAll4CaOk+2wkf46FTVDdHANhS0iRBWBKyNFCHlN5GDxGFhUaMWUW4q21XCkvMkhXsFc+huMMpeYvIKQhZN2Gg==',key_name='tempest-TestNetworkBasicOps-281693536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-0ccuj94c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:12:22Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=e897c19f-7590-405d-9e92-ff9e0fd9b366,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.293 350391 DEBUG nova.network.os_vif_util [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.293 350391 DEBUG nova.network.os_vif_util [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.294 350391 DEBUG os_vif [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.294 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.294 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.295 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.297 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.297 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03ba18c7-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.298 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03ba18c7-39, col_values=(('external_ids', {'iface-id': '03ba18c7-398e-48f9-9269-730aa0ea6368', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:31:0c', 'vm-uuid': 'e897c19f-7590-405d-9e92-ff9e0fd9b366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.301 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 NetworkManager[48886]: <info>  [1764123146.3027] manager: (tap03ba18c7-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.303 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.316 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.317 350391 INFO os_vif [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39')#033[00m
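The AddPortCommand/DbSetCommand transaction that os-vif just committed is equivalent to a single ovs-vsctl call. A sketch with the values from this plug, assuming ovs-vsctl is on PATH and can reach the local ovsdb:

    # Sketch: the ovs-vsctl equivalent of the two ovsdbapp commands above.
    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap03ba18c7-39",
        "--", "set", "Interface", "tap03ba18c7-39",
        "external_ids:iface-id=03ba18c7-398e-48f9-9269-730aa0ea6368",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:49:31:0c",
        "external_ids:vm-uuid=e897c19f-7590-405d-9e92-ff9e0fd9b366",
    ], check=True)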
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.368 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.368 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.369 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] No VIF found with MAC fa:16:3e:49:31:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.369 350391 INFO nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Using config drive#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.404 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.539 350391 INFO nova.virt.libvirt.driver [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Deleting instance files /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d_del#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.540 350391 INFO nova.virt.libvirt.driver [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Deletion of /var/lib/nova/instances/7f2d249d-4d0b-4ee7-ac66-deb2637c906d_del complete#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.617 350391 INFO nova.compute.manager [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.618 350391 DEBUG oslo.service.loopingcall [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.619 350391 DEBUG nova.compute.manager [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.619 350391 DEBUG nova.network.neutron [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.679 350391 INFO nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Creating config drive at /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.686 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4483d54 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:26 compute-0 podman[446202]: 2025-11-26 02:12:26.79413152 +0000 UTC m=+0.101029052 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:12:26 compute-0 podman[446204]: 2025-11-26 02:12:26.801250489 +0000 UTC m=+0.103524381 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:12:26 compute-0 podman[446203]: 2025-11-26 02:12:26.812226717 +0000 UTC m=+0.118560383 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.828 350391 DEBUG nova.network.neutron [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updated VIF entry in instance network info cache for port 03ba18c7-398e-48f9-9269-730aa0ea6368. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.829 350391 DEBUG nova.network.neutron [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updating instance_info_cache with network_info: [{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
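The network_info entry cached above is plain JSON once the trailing #033[00m reset code is stripped. A sketch that extracts the fixed IP and MTU from a trimmed copy of that structure; fields not used here are omitted:

    # Sketch: parse the cached network_info entry logged above (trimmed).
    import json

    network_info = json.loads('''
    [{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368",
      "address": "fa:16:3e:49:31:0c",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.4", "type": "fixed"}]}],
                  "meta": {"mtu": 1442}}}]''')

    vif = network_info[0]
    ips = [ip["address"] for s in vif["network"]["subnets"] for ip in s["ips"]]
    print(vif["address"], ips, vif["network"]["meta"]["mtu"])  # fa:16:3e:49:31:0c ['10.100.0.4'] 1442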
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.831 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4483d54" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
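Note the -publisher value above contains spaces; processutils logs the argv joined by spaces, so the command only looks unquoted. A sketch of the same config-drive build, assuming mkisofs is installed and /tmp/metadata is a hypothetical stand-in for nova's staged metadata tempdir:

    # Sketch: rebuild the config-drive ISO the way nova invokes mkisofs above.
    # /tmp/metadata stands in for nova's temporary metadata tree.
    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/metadata",
    ], check=True)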
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.868 350391 DEBUG nova.storage.rbd_utils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image 74d081af-66cd-4e37-99e4-31f777885766_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.879 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config 74d081af-66cd-4e37-99e4-31f777885766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.900 350391 DEBUG oslo_concurrency.lockutils [req-4698306b-753f-4d9d-9a26-424a092bf201 req-02bd18a5-ec83-47b3-985b-c99f9fe52178 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.938 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated VIF entry in instance network info cache for port 0659d4f2-a740-4ecb-92df-7e2267226c3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.938 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.960 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.960 350391 DEBUG nova.compute.manager [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-changed-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.960 350391 DEBUG nova.compute.manager [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Refreshing instance network info cache due to event network-changed-719174c4-1a03-42f1-a0c2-6d96523c40e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.961 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.962 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:12:26 compute-0 nova_compute[350387]: 2025-11-26 02:12:26.963 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Refreshing network info cache for port 719174c4-1a03-42f1-a0c2-6d96523c40e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:12:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:12:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723885702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:12:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:12:26 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723885702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
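The two audited mon_commands are routine capacity polls from client.openstack and map directly onto the ceph CLI. A sketch, assuming a local keyring usable with --id openstack:

    # Sketch: CLI equivalents of the {"prefix":"df"} and
    # {"prefix":"osd pool get-quota"} mon_commands audited above.
    import json
    import subprocess

    def mon_json(*args):
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", *args, "--format", "json"])
        return json.loads(out)

    print(mon_json("df")["stats"])
    print(mon_json("osd", "pool", "get-quota", "volumes"))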
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.107 350391 DEBUG oslo_concurrency.processutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config 74d081af-66cd-4e37-99e4-31f777885766_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.107 350391 INFO nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Deleting local config drive /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config because it was imported into RBD.#033[00m
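The import maps one-to-one onto the rbd CLI; once the image exists in the vms pool, nova removes the local ISO, which is the deletion logged above. A sketch, assuming the client.openstack keyring and the /etc/ceph/ceph.conf from this deployment:

    # Sketch: import the local config drive into RBD and drop the local
    # copy, mirroring the two log lines above.
    import os
    import subprocess

    src = "/var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766/disk.config"
    subprocess.run([
        "rbd", "import", "--pool", "vms", src,
        "74d081af-66cd-4e37-99e4-31f777885766_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.unlink(src)  # nova logs this as "Deleting local config drive"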
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.147 350391 INFO nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Creating config drive at /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.151 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn3qzwyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:27 compute-0 systemd-udevd[446009]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.1923] manager: (tap0659d4f2-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 26 02:12:27 compute-0 kernel: tap0659d4f2-a7: entered promiscuous mode
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.197 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00115|binding|INFO|Claiming lport 0659d4f2-a740-4ecb-92df-7e2267226c3e for this chassis.
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00116|binding|INFO|0659d4f2-a740-4ecb-92df-7e2267226c3e: Claiming fa:16:3e:91:80:c9 10.100.2.57
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.2142] device (tap0659d4f2-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.2155] device (tap0659d4f2-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.212 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:80:c9 10.100.2.57'], port_security=['fa:16:3e:91:80:c9 10.100.2.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.57/16', 'neutron:device_id': '74d081af-66cd-4e37-99e4-31f777885766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02245f78-e221-4ecd-ae3b-975782a68c5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20511ddf-b2cd-472a-84f8-e35fd6d0c575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61c2d3e7-61df-4898-a297-774785d24b01, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=0659d4f2-a740-4ecb-92df-7e2267226c3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.215 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 0659d4f2-a740-4ecb-92df-7e2267226c3e in datapath 02245f78-e221-4ecd-ae3b-975782a68c5e bound to our chassis#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.220 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02245f78-e221-4ecd-ae3b-975782a68c5e#033[00m
Nov 26 02:12:27 compute-0 systemd-machined[138512]: New machine qemu-11-instance-0000000b.
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.232 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4e435e71-ef7a-40e9-9d7e-83d7b3482bd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.235 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02245f78-e1 in ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.237 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02245f78-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.237 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[15708c28-e1fc-4c49-b7d9-3d703d314165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.239 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d4473-55f1-4f73-9613-3ed52f5f2ff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.252 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00117|binding|INFO|Setting lport 0659d4f2-a740-4ecb-92df-7e2267226c3e ovn-installed in OVS
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00118|binding|INFO|Setting lport 0659d4f2-a740-4ecb-92df-7e2267226c3e up in Southbound
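ovn-controller has now claimed the lport and marked it up in the Southbound database. A sketch for verifying the binding from this chassis, assuming ovn-sbctl can reach the local Southbound socket:

    # Sketch: confirm the Port_Binding ovn-controller just claimed.
    import subprocess

    subprocess.run([
        "ovn-sbctl", "--columns=logical_port,chassis,up",
        "find", "Port_Binding",
        "logical_port=0659d4f2-a740-4ecb-92df-7e2267226c3e",
    ], check=True)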
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.253 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[1c69f8a6-2c99-4255-8d19-3ab7ba1e7199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.256 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.284 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3b44c2-2d10-4718-bd7a-cc45fcdd8aa5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.287 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnn3qzwyc" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.314 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[17d5eaf1-3a96-49f3-b1ed-91064e0e94e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.326 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[79dae3aa-ff2e-4412-bb85-a846c6054c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.3291] manager: (tap02245f78-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.334 350391 DEBUG nova.storage.rbd_utils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] rbd image e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.344 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.362 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[4f986f90-dc9b-432c-8659-97cede86eceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.366 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[f86d5116-577a-4525-ba81-1d84edfe30c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.376 350391 DEBUG nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-unplugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.376 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] No waiting events found dispatching network-vif-unplugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-unplugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.377 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.378 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.378 350391 DEBUG oslo_concurrency.lockutils [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.378 350391 DEBUG nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] No waiting events found dispatching network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.378 350391 WARNING nova.compute.manager [req-33969e42-e35f-46fa-b0ea-fbba8cbb7272 req-bd9834ee-5c16-4f6b-81d3-65a9d6b71d0d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received unexpected event network-vif-plugged-719174c4-1a03-42f1-a0c2-6d96523c40e9 for instance with vm_state active and task_state deleting.#033[00m
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.3922] device (tap02245f78-e0): carrier: link connected
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.399 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[ca533cc7-2b80-495d-830d-c90cb429ea0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.419 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[679d59b9-d788-4d36-b121-9f6be99dce38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02245f78-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c1:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677802, 'reachable_time': 23658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 446492, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.439 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[7165a2b7-9210-47a8-a51a-6bf819438b58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:c156'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677802, 'tstamp': 677802}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 446503, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.460 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[98a801ed-c543-4103-89db-dfd354836b15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02245f78-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c1:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677802, 'reachable_time': 23658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 446506, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
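The two replies above are pyroute2 netlink messages (RTM_NEWADDR, then RTM_NEWLINK) marshalled back from the oslo.privsep daemon, which runs the queries with elevated privileges inside the ovnmeta- network namespace on the agent's behalf. A minimal sketch of the same link query run directly with pyroute2; namespace and attribute names are taken from the log, everything else is illustrative:

    # Sketch: dump link state inside the metadata namespace, as the
    # privsep helper does above. Assumes pyroute2 is installed and the
    # namespace exists; must run with enough privilege to enter it.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e') as ns:
        for link in ns.get_links():
            name = link.get_attr('IFLA_IFNAME')
            mac = link.get_attr('IFLA_ADDRESS')
            state = link.get_attr('IFLA_OPERSTATE')
            print(name, mac, state)  # e.g. tap02245f78-e1 fa:16:3e:78:c1:56 UP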
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.491 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c40efb5e-263a-457f-83dc-e466f9c613da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 290 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.9 MiB/s wr, 171 op/s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.569 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d735d5b7-46a9-41d2-b9c1-e3f2040ff2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.571 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02245f78-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.571 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.572 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02245f78-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.5756] manager: (tap02245f78-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.574 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.577 350391 DEBUG oslo_concurrency.processutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.577 350391 INFO nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Deleting local config drive /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config because it was imported into RBD.#033[00m
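The ordering here matters: the config drive is built locally, imported into the vms pool as <uuid>_disk.config, and only deleted from /var/lib/nova/instances once the import returns 0, so the RBD copy is authoritative before the local one disappears. Schematically (same commands as the log lines above, reduced to a sketch with no retry or cleanup handling):

    # Sketch of the import-then-delete sequence logged above.
    import os
    import subprocess

    disk = '/var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.config'
    subprocess.run(['rbd', 'import', '--pool', 'vms', disk,
                    'e897c19f-7590-405d-9e92-ff9e0fd9b366_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(disk)  # safe only because check=True raised on a failed import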
Nov 26 02:12:27 compute-0 kernel: tap02245f78-e0: entered promiscuous mode
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.581 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.584 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02245f78-e0, col_values=(('external_ids', {'iface-id': 'b6066942-f0e5-4ff0-92ae-a027fdd86fa7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
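The DelPortCommand/AddPortCommand/DbSetCommand trio above is the standard OVS plug sequence: remove the tap from br-ex if it is there, attach it to br-int, then stamp the Interface with external_ids:iface-id so ovn-controller can match it to the logical port. A hedged sketch of the same sequence through ovsdbapp's Open vSwitch API; the socket path is an assumption and the names are copied from the log:

    # Sketch: the del/add/db_set transaction via ovsdbapp, assuming a
    # local ovsdb-server socket and an existing tap device.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap02245f78-e0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap02245f78-e0', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap02245f78-e0',
            ('external_ids',
             {'iface-id': 'b6066942-f0e5-4ff0-92ae-a027fdd86fa7'})))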
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00119|binding|INFO|Releasing lport b6066942-f0e5-4ff0-92ae-a027fdd86fa7 from this chassis (sb_readonly=0)
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.588 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 02:12:27 compute-0 virtqemud[138515]: End of file while reading data: Input/output error
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.611 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.614 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.615 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02245f78-e221-4ecd-ae3b-975782a68c5e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02245f78-e221-4ecd-ae3b-975782a68c5e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
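The ENOENT above is the normal first-run case: the agent probes the haproxy pid file before (re)spawning the proxy, and neutron.agent.linux.utils.get_value_from_file treats a failed read as "no value" rather than an error. Roughly, as a paraphrase (the `default` parameter is my addition, not neutron's exact signature):

    # Paraphrase: a missing pid file yields a default, not a crash.
    def get_value_from_file(path, converter=None, default=None):
        try:
            with open(path) as f:
                raw = f.read()
            return converter(raw) if converter else raw
        except (OSError, ValueError):
            return default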
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.617 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[92c9cbe2-c6fe-4432-97ed-463a06c92726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.618 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-02245f78-e221-4ecd-ae3b-975782a68c5e
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/02245f78-e221-4ecd-ae3b-975782a68c5e.pid.haproxy
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID 02245f78-e221-4ecd-ae3b-975782a68c5e
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
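The generated configuration is the whole metadata path in miniature: haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, tags every request with an X-OVN-Network-ID header, and forwards it to the agent's unix socket at /var/lib/neutron/metadata_proxy. A hand-rolled request against that socket would look roughly like this (illustrative only; the real proxy also passes client headers such as X-Forwarded-For):

    # Sketch: speak HTTP to the agent's unix socket the way haproxy does.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/var/lib/neutron/metadata_proxy')
    s.sendall(b'GET /openstack/latest/meta_data.json HTTP/1.1\r\n'
              b'Host: 169.254.169.254\r\n'
              b'X-OVN-Network-ID: 02245f78-e221-4ecd-ae3b-975782a68c5e\r\n'
              b'Connection: close\r\n\r\n')
    print(s.recv(65536).decode(errors='replace'))  # headers + JSON body
    s.close()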
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.619 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'env', 'PROCESS_TAG=haproxy-02245f78-e221-4ecd-ae3b-975782a68c5e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02245f78-e221-4ecd-ae3b-975782a68c5e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
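Stripped of the rootwrap and PROCESS_TAG indirection, that command line boils down to starting haproxy inside the network namespace against the config just written. A minimal equivalent, assuming root privileges and the paths from the log:

    # Sketch: what the rootwrap invocation above reduces to.
    import subprocess

    netns = 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '02245f78-e221-4ecd-ae3b-975782a68c5e.conf')
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)  # haproxy daemonizes itself per the config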
Nov 26 02:12:27 compute-0 kernel: tap03ba18c7-39: entered promiscuous mode
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.6403] manager: (tap03ba18c7-39): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.641 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 systemd-udevd[446467]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00120|binding|INFO|Claiming lport 03ba18c7-398e-48f9-9269-730aa0ea6368 for this chassis.
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00121|binding|INFO|03ba18c7-398e-48f9-9269-730aa0ea6368: Claiming fa:16:3e:49:31:0c 10.100.0.4
Nov 26 02:12:27 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:27.653 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:31:0c 10.100.0.4'], port_security=['fa:16:3e:49:31:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e897c19f-7590-405d-9e92-ff9e0fd9b366', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60e683b1-41d9-43e8-8fca-b523d72cc1fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=03ba18c7-398e-48f9-9269-730aa0ea6368) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.6583] device (tap03ba18c7-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:12:27 compute-0 NetworkManager[48886]: <info>  [1764123147.6603] device (tap03ba18c7-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.665 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00122|binding|INFO|Setting lport 03ba18c7-398e-48f9-9269-730aa0ea6368 ovn-installed in OVS
Nov 26 02:12:27 compute-0 ovn_controller[89102]: 2025-11-26T02:12:27Z|00123|binding|INFO|Setting lport 03ba18c7-398e-48f9-9269-730aa0ea6368 up in Southbound
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.670 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:27 compute-0 systemd-machined[138512]: New machine qemu-12-instance-0000000c.
Nov 26 02:12:27 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 26 02:12:27 compute-0 podman[446541]: 2025-11-26 02:12:27.693090356 +0000 UTC m=+0.103416699 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 02:12:27 compute-0 podman[446541]: 2025-11-26 02:12:27.798276253 +0000 UTC m=+0.208602616 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.934 350391 DEBUG nova.network.neutron [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.953 350391 INFO nova.compute.manager [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Took 1.33 seconds to deallocate network for instance.#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.993 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:27 compute-0 nova_compute[350387]: 2025-11-26 02:12:27.994 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.089 350391 DEBUG oslo_concurrency.processutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
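With images_type=rbd the resource tracker sizes disk from the cluster, not from a local statvfs, which is why it shells out to ceph df here. A minimal reader of that JSON (the stats field names match current Ceph releases but should be treated as an assumption):

    # Sketch: derive cluster capacity from `ceph df --format=json`.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free "
          f"of {stats['total_bytes'] / gib:.0f} GiB")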
Nov 26 02:12:28 compute-0 podman[446673]: 2025-11-26 02:12:28.105541742 +0000 UTC m=+0.085934248 container create 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.131 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123148.130175, 74d081af-66cd-4e37-99e4-31f777885766 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.132 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] VM Started (Lifecycle Event)#033[00m
Nov 26 02:12:28 compute-0 systemd[1]: Started libpod-conmon-9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326.scope.
Nov 26 02:12:28 compute-0 podman[446673]: 2025-11-26 02:12:28.063487354 +0000 UTC m=+0.043879800 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.160 350391 DEBUG nova.compute.manager [req-af870390-ed26-4516-aec0-295ecf716c08 req-d3b1d00c-ca3f-4178-9e7c-75c8627def06 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.161 350391 DEBUG oslo_concurrency.lockutils [req-af870390-ed26-4516-aec0-295ecf716c08 req-d3b1d00c-ca3f-4178-9e7c-75c8627def06 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.161 350391 DEBUG oslo_concurrency.lockutils [req-af870390-ed26-4516-aec0-295ecf716c08 req-d3b1d00c-ca3f-4178-9e7c-75c8627def06 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.162 350391 DEBUG oslo_concurrency.lockutils [req-af870390-ed26-4516-aec0-295ecf716c08 req-d3b1d00c-ca3f-4178-9e7c-75c8627def06 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.162 350391 DEBUG nova.compute.manager [req-af870390-ed26-4516-aec0-295ecf716c08 req-d3b1d00c-ca3f-4178-9e7c-75c8627def06 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Processing event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.164 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.170 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123148.1311953, 74d081af-66cd-4e37-99e4-31f777885766 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.170 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:12:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1872517a346af7240d3817d8e2052f966c97db92ccb2808d6c4b55f00422ae1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.192 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 podman[446673]: 2025-11-26 02:12:28.200935925 +0000 UTC m=+0.181328361 container init 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.202 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:12:28 compute-0 podman[446673]: 2025-11-26 02:12:28.209235248 +0000 UTC m=+0.189627664 container start 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.225 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
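Both instances hit the same guard in this window: a libvirt lifecycle event (Started, Paused, Resumed) that arrives while the instance still has a task_state, here 'spawning', is deliberately ignored, because the in-flight task will set the final power state itself. In outline, as a paraphrase of the check in nova.compute.manager rather than its verbatim code:

    # Paraphrase: why "pending task (spawning). Skip." is logged above.
    def should_sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:   # e.g. 'spawning'
            return False                      # -> "has a pending task ... Skip."
        return instance.power_state != vm_power_state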
Nov 26 02:12:28 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [NOTICE]   (446774) : New worker (446800) forked
Nov 26 02:12:28 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [NOTICE]   (446774) : Loading success.
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.272 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 03ba18c7-398e-48f9-9269-730aa0ea6368 in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 unbound from our chassis#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.273 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.289 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[ae31cba4-65a7-4f7e-a0fe-321287f84194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.291 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123148.2905614, e897c19f-7590-405d-9e92-ff9e0fd9b366 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.292 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] VM Started (Lifecycle Event)#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.294 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.298 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.313 350391 INFO nova.virt.libvirt.driver [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Instance spawned successfully.#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.314 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.317 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.324 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.329 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[47b952ef-a2ba-49af-9a9f-8c1a006cbd31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.332 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[37ff6ad0-1b44-4a69-a15d-bfe0590887d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.361 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.362 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.362 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.362 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.363 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.363 350391 DEBUG nova.virt.libvirt.driver [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.367 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4d3cbe-c74f-4fd0-9862-593c154cc4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.386 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5daa2c0a-627b-453c-bdd0-ee9f0c07d590]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6006a9a5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670633, 'reachable_time': 43533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 446836, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.397 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.398 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123148.2906458, e897c19f-7590-405d-9e92-ff9e0fd9b366 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.398 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.402 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffc51bc-a56d-41b4-95fd-9c8e83c30617]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6006a9a5-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670651, 'tstamp': 670651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 446840, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6006a9a5-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670655, 'tstamp': 670655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 446840, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
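This RTM_NEWADDR pair confirms what "Provisioning metadata for network ..." means concretely: the namespace-side veth carries both the subnet address (10.100.0.2/28) and the fixed metadata address 169.254.169.254/32 that haproxy binds to. Re-created directly with pyroute2, using the names from the log and assuming the namespace and device already exist:

    # Sketch: the two addresses the agent expects on the namespace veth.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4') as ns:
        idx = ns.link_lookup(ifname='tap6006a9a5-91')[0]
        ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)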
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.404 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6006a9a5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.407 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.407 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6006a9a5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.408 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.408 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6006a9a5-90, col_values=(('external_ids', {'iface-id': '0fdbc9f8-20bb-4f6b-b66d-965099ff6047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:12:28 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:28.408 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.433 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.436 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.439 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123148.2977018, e897c19f-7590-405d-9e92-ff9e0fd9b366 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.439 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.448 350391 INFO nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Took 5.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.449 350391 DEBUG nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.459 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.468 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:12:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099803673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.525 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.537 350391 DEBUG oslo_concurrency.processutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.550 350391 DEBUG nova.compute.provider_tree [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.567 350391 INFO nova.compute.manager [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Took 6.83 seconds to build instance.#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.569 350391 DEBUG nova.scheduler.client.report [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
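That inventory dict translates into schedulable capacity as (total - reserved) * allocation_ratio, so the values logged here come out to 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked out explicitly:

    # Worked example: capacity placement schedules against, from the
    # inventory dict in the log line above.
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2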
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.589 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.591 350391 DEBUG oslo_concurrency.lockutils [None req-268d44b6-c0cd-423a-95db-8f8576d76920 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.609 350391 INFO nova.scheduler.client.report [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Deleted allocations for instance 7f2d249d-4d0b-4ee7-ac66-deb2637c906d#033[00m
Nov 26 02:12:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:12:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:12:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.685 350391 DEBUG oslo_concurrency.lockutils [None req-b042a651-5ae1-4143-a7a7-b2b99a8deef3 57143d8a520a40849581651b89c19756 ca49eb89e83e4ab8a7d9392b980106ac - - default default] Lock "7f2d249d-4d0b-4ee7-ac66-deb2637c906d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.748 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updated VIF entry in instance network info cache for port 719174c4-1a03-42f1-a0c2-6d96523c40e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.749 350391 DEBUG nova.network.neutron [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Updating instance_info_cache with network_info: [{"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "address": "fa:16:3e:a8:fb:ef", "network": {"id": "d76cf0d9-50e2-47d9-b2d5-30e62916ffe8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1179013875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca49eb89e83e4ab8a7d9392b980106ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap719174c4-1a", "ovs_interfaceid": "719174c4-1a03-42f1-a0c2-6d96523c40e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
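The instance_info_cache entry is plain JSON, so pulling the addresses out is a couple of comprehensions. For instance, with the VIF logged above (truncated here to the fields actually used):

    # Sketch: extract the fixed IPs from a cached network_info VIF entry.
    vif = {"id": "719174c4-1a03-42f1-a0c2-6d96523c40e9",
           "network": {"subnets": [
               {"cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(fixed)  # ['10.100.0.11']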
Nov 26 02:12:28 compute-0 nova_compute[350387]: 2025-11-26 02:12:28.770 350391 DEBUG oslo_concurrency.lockutils [req-09b7faf0-7f70-4095-8e0e-d0b1427fa2c9 req-f09eb3c5-cfc1-435a-953f-426cfe271bbe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-7f2d249d-4d0b-4ee7-ac66-deb2637c906d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:12:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.384 350391 DEBUG nova.compute.manager [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.384 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.385 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.385 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.385 350391 DEBUG nova.compute.manager [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Processing event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.386 350391 DEBUG nova.compute.manager [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.386 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.386 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.387 350391 DEBUG oslo_concurrency.lockutils [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.387 350391 DEBUG nova.compute.manager [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] No waiting events found dispatching network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.387 350391 WARNING nova.compute.manager [req-381bdbad-76c9-4ace-b212-e7d0cbc8e152 req-6ebd2804-c9a6-46d1-a6a5-a46121d4547f 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received unexpected event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e for instance with vm_state building and task_state spawning.
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.388 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.395 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.397 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123149.3965724, 74d081af-66cd-4e37-99e4-31f777885766 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.398 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] VM Resumed (Lifecycle Event)
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.426 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.429 350391 INFO nova.virt.libvirt.driver [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Instance spawned successfully.
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.429 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.434 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.460 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.473 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.474 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.474 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.475 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.476 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.477 350391 DEBUG nova.virt.libvirt.driver [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:12:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 290 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 144 op/s
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.563 350391 INFO nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Took 8.53 seconds to spawn the instance on the hypervisor.
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.564 350391 DEBUG nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.639 350391 INFO nova.compute.manager [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Took 9.63 seconds to build instance.
Nov 26 02:12:29 compute-0 nova_compute[350387]: 2025-11-26 02:12:29.664 350391 DEBUG oslo_concurrency.lockutils [None req-3b627895-2f8f-48a9-b91e-a48065990662 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:29 compute-0 podman[158021]: time="2025-11-26T02:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:12:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:12:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9108 "" "Go-http-client/1.1"
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 71e42f08-cedc-43a5-a3f1-1e44b47baca1 does not exist
Nov 26 02:12:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b6452283-7112-4ef8-ab94-9952f381c3d9 does not exist
Nov 26 02:12:29 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 7e057594-465f-43ad-810d-83ad9f2497cb does not exist
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:12:29 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:12:29 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:12:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.411 350391 DEBUG nova.compute.manager [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Received event network-vif-deleted-719174c4-1a03-42f1-a0c2-6d96523c40e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.412 350391 DEBUG nova.compute.manager [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.412 350391 DEBUG oslo_concurrency.lockutils [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.413 350391 DEBUG oslo_concurrency.lockutils [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.413 350391 DEBUG oslo_concurrency.lockutils [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.413 350391 DEBUG nova.compute.manager [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] No waiting events found dispatching network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:12:30 compute-0 nova_compute[350387]: 2025-11-26 02:12:30.414 350391 WARNING nova.compute.manager [req-5388def2-0353-4d19-a1ab-c53b73fd187c req-5d8e7097-e0a4-4f8e-9441-9c1f3a141723 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received unexpected event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 for instance with vm_state active and task_state None.
Nov 26 02:12:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:12:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:12:31 compute-0 nova_compute[350387]: 2025-11-26 02:12:31.303 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:31 compute-0 openstack_network_exporter[367323]: ERROR   02:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:12:31 compute-0 openstack_network_exporter[367323]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:12:31 compute-0 openstack_network_exporter[367323]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:12:31 compute-0 openstack_network_exporter[367323]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:12:31 compute-0 openstack_network_exporter[367323]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:12:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Nov 26 02:12:31 compute-0 podman[447141]: 2025-11-26 02:12:31.883952577 +0000 UTC m=+0.082463781 container create b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:12:31 compute-0 podman[447141]: 2025-11-26 02:12:31.849254965 +0000 UTC m=+0.047766219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:31 compute-0 systemd[1]: Started libpod-conmon-b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a.scope.
Nov 26 02:12:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:32 compute-0 podman[447141]: 2025-11-26 02:12:32.009812714 +0000 UTC m=+0.208323948 container init b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:12:32 compute-0 podman[447141]: 2025-11-26 02:12:32.019637209 +0000 UTC m=+0.218148413 container start b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:12:32 compute-0 podman[447141]: 2025-11-26 02:12:32.024572327 +0000 UTC m=+0.223083531 container attach b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:12:32 compute-0 systemd[1]: libpod-b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a.scope: Deactivated successfully.
Nov 26 02:12:32 compute-0 festive_davinci[447157]: 167 167
Nov 26 02:12:32 compute-0 conmon[447157]: conmon b40c41fded55cf953b3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a.scope/container/memory.events
Nov 26 02:12:32 compute-0 podman[447162]: 2025-11-26 02:12:32.104584369 +0000 UTC m=+0.055682141 container died b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dc8fa693fa1e1bd26637a7e223a5be3342a5de3beb021d2d4bd2ffb2072c1e4-merged.mount: Deactivated successfully.
Nov 26 02:12:32 compute-0 podman[447162]: 2025-11-26 02:12:32.151942126 +0000 UTC m=+0.103039898 container remove b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 02:12:32 compute-0 systemd[1]: libpod-conmon-b40c41fded55cf953b3a0340f07c9d6f9e2b3196aa2c7e54d66b9a94c96bec2a.scope: Deactivated successfully.
Nov 26 02:12:32 compute-0 nova_compute[350387]: 2025-11-26 02:12:32.290 350391 DEBUG nova.compute.manager [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-changed-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:12:32 compute-0 nova_compute[350387]: 2025-11-26 02:12:32.292 350391 DEBUG nova.compute.manager [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Refreshing instance network info cache due to event network-changed-03ba18c7-398e-48f9-9269-730aa0ea6368. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:12:32 compute-0 nova_compute[350387]: 2025-11-26 02:12:32.292 350391 DEBUG oslo_concurrency.lockutils [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:12:32 compute-0 nova_compute[350387]: 2025-11-26 02:12:32.292 350391 DEBUG oslo_concurrency.lockutils [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:12:32 compute-0 nova_compute[350387]: 2025-11-26 02:12:32.293 350391 DEBUG nova.network.neutron [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Refreshing network info cache for port 03ba18c7-398e-48f9-9269-730aa0ea6368 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 02:12:32 compute-0 podman[447181]: 2025-11-26 02:12:32.438503425 +0000 UTC m=+0.078746347 container create db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:12:32 compute-0 podman[447181]: 2025-11-26 02:12:32.409005898 +0000 UTC m=+0.049248830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:32 compute-0 systemd[1]: Started libpod-conmon-db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e.scope.
Nov 26 02:12:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:32 compute-0 podman[447181]: 2025-11-26 02:12:32.611941724 +0000 UTC m=+0.252184646 container init db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:12:32 compute-0 podman[447181]: 2025-11-26 02:12:32.623476298 +0000 UTC m=+0.263719210 container start db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:12:32 compute-0 podman[447181]: 2025-11-26 02:12:32.627590293 +0000 UTC m=+0.267833225 container attach db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:12:33 compute-0 nova_compute[350387]: 2025-11-26 02:12:33.438 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Nov 26 02:12:33 compute-0 quirky_saha[447196]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:12:33 compute-0 quirky_saha[447196]: --> relative data size: 1.0
Nov 26 02:12:33 compute-0 quirky_saha[447196]: --> All data devices are unavailable
Nov 26 02:12:33 compute-0 systemd[1]: libpod-db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e.scope: Deactivated successfully.
Nov 26 02:12:33 compute-0 systemd[1]: libpod-db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e.scope: Consumed 1.216s CPU time.
Nov 26 02:12:33 compute-0 podman[447181]: 2025-11-26 02:12:33.958225315 +0000 UTC m=+1.598468277 container died db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 26 02:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a247c5bf19ed39649291fc2f1d633289397cffebff91d14618b6b1e58463e9-merged.mount: Deactivated successfully.
Nov 26 02:12:34 compute-0 podman[447181]: 2025-11-26 02:12:34.073081863 +0000 UTC m=+1.713324785 container remove db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:12:34 compute-0 systemd[1]: libpod-conmon-db2e26bfb19b4d99544336b4aaa3c11895a54d16e6a7de10d2b7066343d4da4e.scope: Deactivated successfully.
Nov 26 02:12:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.118752731 +0000 UTC m=+0.081460304 container create cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:12:35 compute-0 systemd[1]: Started libpod-conmon-cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6.scope.
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.086072085 +0000 UTC m=+0.048779748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.218038683 +0000 UTC m=+0.180746286 container init cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.226659424 +0000 UTC m=+0.189367007 container start cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.23151872 +0000 UTC m=+0.194226303 container attach cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:12:35 compute-0 hardcore_gates[447390]: 167 167
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.234197875 +0000 UTC m=+0.196905458 container died cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:12:35 compute-0 systemd[1]: libpod-cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6.scope: Deactivated successfully.
Nov 26 02:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4c5d080c27d205268a29cd7358b765459cf073f50e5c3e7498ad81827f21c8-merged.mount: Deactivated successfully.
Nov 26 02:12:35 compute-0 podman[447386]: 2025-11-26 02:12:35.284155245 +0000 UTC m=+0.120752964 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:12:35 compute-0 podman[447373]: 2025-11-26 02:12:35.291602454 +0000 UTC m=+0.254310037 container remove cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:35 compute-0 systemd[1]: libpod-conmon-cbdce76416e0bcd8e6526ea1c452024dfe5aa60d8341474089a997b4da9a44a6.scope: Deactivated successfully.
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.326 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.327 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.327 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.327 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.327 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:12:35 compute-0 podman[447389]: 2025-11-26 02:12:35.332800508 +0000 UTC m=+0.167242807 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.492 350391 DEBUG nova.network.neutron [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updated VIF entry in instance network info cache for port 03ba18c7-398e-48f9-9269-730aa0ea6368. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.492 350391 DEBUG nova.network.neutron [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updating instance_info_cache with network_info: [{"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:12:35 compute-0 podman[447458]: 2025-11-26 02:12:35.509781867 +0000 UTC m=+0.061535965 container create b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:12:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 231 op/s
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.523 350391 DEBUG oslo_concurrency.lockutils [req-b130c322-7267-47eb-a365-0a352e4176d4 req-c7d22e6b-df1d-4342-ab11-0361c2afe2f1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-e897c19f-7590-405d-9e92-ff9e0fd9b366" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:12:35 compute-0 podman[447458]: 2025-11-26 02:12:35.477586475 +0000 UTC m=+0.029340583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:35 compute-0 systemd[1]: Started libpod-conmon-b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e.scope.
Nov 26 02:12:35 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818b760c445f2771acd613b3efad3ecb61655b1c2ac86413248ee3a2efd046bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818b760c445f2771acd613b3efad3ecb61655b1c2ac86413248ee3a2efd046bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818b760c445f2771acd613b3efad3ecb61655b1c2ac86413248ee3a2efd046bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818b760c445f2771acd613b3efad3ecb61655b1c2ac86413248ee3a2efd046bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:35 compute-0 podman[447458]: 2025-11-26 02:12:35.667862166 +0000 UTC m=+0.219616234 container init b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:12:35 compute-0 podman[447458]: 2025-11-26 02:12:35.68692168 +0000 UTC m=+0.238675738 container start b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:12:35 compute-0 podman[447458]: 2025-11-26 02:12:35.692039113 +0000 UTC m=+0.243793191 container attach b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:12:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:35 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635043419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.855 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.987 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.989 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.997 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:35 compute-0 nova_compute[350387]: 2025-11-26 02:12:35.997 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.005 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.005 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.305 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:36 compute-0 loving_carson[447490]: {
Nov 26 02:12:36 compute-0 loving_carson[447490]:    "0": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:        {
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "devices": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "/dev/loop3"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            ],
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_name": "ceph_lv0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_size": "21470642176",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "name": "ceph_lv0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "tags": {
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_name": "ceph",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.crush_device_class": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.encrypted": "0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_id": "0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.vdo": "0"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            },
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "vg_name": "ceph_vg0"
Nov 26 02:12:36 compute-0 loving_carson[447490]:        }
Nov 26 02:12:36 compute-0 loving_carson[447490]:    ],
Nov 26 02:12:36 compute-0 loving_carson[447490]:    "1": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:        {
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "devices": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "/dev/loop4"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            ],
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_name": "ceph_lv1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_size": "21470642176",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "name": "ceph_lv1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "tags": {
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_name": "ceph",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.crush_device_class": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.encrypted": "0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_id": "1",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.vdo": "0"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            },
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "vg_name": "ceph_vg1"
Nov 26 02:12:36 compute-0 loving_carson[447490]:        }
Nov 26 02:12:36 compute-0 loving_carson[447490]:    ],
Nov 26 02:12:36 compute-0 loving_carson[447490]:    "2": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:        {
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "devices": [
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "/dev/loop5"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            ],
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_name": "ceph_lv2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_size": "21470642176",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "name": "ceph_lv2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "tags": {
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.cluster_name": "ceph",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.crush_device_class": "",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.encrypted": "0",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osd_id": "2",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:                "ceph.vdo": "0"
Nov 26 02:12:36 compute-0 loving_carson[447490]:            },
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "type": "block",
Nov 26 02:12:36 compute-0 loving_carson[447490]:            "vg_name": "ceph_vg2"
Nov 26 02:12:36 compute-0 loving_carson[447490]:        }
Nov 26 02:12:36 compute-0 loving_carson[447490]:    ]
Nov 26 02:12:36 compute-0 loving_carson[447490]: }
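The JSON block emitted by the short-lived loving_carson container looks like "ceph-volume lvm list --format json" output from cephadm's per-host device scan (an inference from the field names, not stated in the log): a map of OSD id to the logical volumes backing it. An illustrative parser, assuming raw holds the JSON text captured above:

    import json

    report = json.loads(raw)  # raw = the JSON text logged above (assumed)
    for osd_id, lvs in report.items():
        for lv in lvs:
            # e.g. 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 835781ef-...
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])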
Nov 26 02:12:36 compute-0 systemd[1]: libpod-b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e.scope: Deactivated successfully.
Nov 26 02:12:36 compute-0 conmon[447490]: conmon b7fb7d3ca3237cf3e4c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e.scope/container/memory.events
Nov 26 02:12:36 compute-0 podman[447458]: 2025-11-26 02:12:36.539489208 +0000 UTC m=+1.091243276 container died b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.557 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.558 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3470MB free_disk=59.900882720947266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.558 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.559 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-818b760c445f2771acd613b3efad3ecb61655b1c2ac86413248ee3a2efd046bb-merged.mount: Deactivated successfully.
Nov 26 02:12:36 compute-0 podman[447458]: 2025-11-26 02:12:36.626504226 +0000 UTC m=+1.178258284 container remove b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_carson, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:12:36 compute-0 systemd[1]: libpod-conmon-b7fb7d3ca3237cf3e4c98244785992389ec66cedfee80d4db09ba5c7b012b37e.scope: Deactivated successfully.
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.756 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance a6b626e1-3c31-460a-be1a-02b342efbb84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.757 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.757 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance e897c19f-7590-405d-9e92-ff9e0fd9b366 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.758 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.758 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:12:36 compute-0 podman[447515]: 2025-11-26 02:12:36.788184186 +0000 UTC m=+0.110638481 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:12:36 compute-0 podman[447557]: 2025-11-26 02:12:36.924488225 +0000 UTC m=+0.106045782 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 26 02:12:36 compute-0 nova_compute[350387]: 2025-11-26 02:12:36.958 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
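nova-compute gathers Ceph pool capacity by shelling out through oslo.concurrency, as logged above. A minimal sketch of the same call (assumes the ceph CLI, /etc/ceph/ceph.conf and a client.openstack keyring are present on the host; the "stats" field names follow recent ceph releases):

    import json
    import subprocess

    # Same command the libvirt driver runs via oslo_concurrency.processutils
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    # Cluster-wide totals; per-pool numbers live under df["pools"]
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])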
Nov 26 02:12:36 compute-0 ovn_controller[89102]: 2025-11-26T02:12:36Z|00124|binding|INFO|Releasing lport 0fdbc9f8-20bb-4f6b-b66d-965099ff6047 from this chassis (sb_readonly=0)
Nov 26 02:12:36 compute-0 ovn_controller[89102]: 2025-11-26T02:12:36Z|00125|binding|INFO|Releasing lport b6066942-f0e5-4ff0-92ae-a027fdd86fa7 from this chassis (sb_readonly=0)
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.059 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:12:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685331048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.401 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.413 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.441 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.463 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:12:37 compute-0 nova_compute[350387]: 2025-11-26 02:12:37.464 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
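The inventory reported to placement above determines schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked out for the values in this log (illustrative):

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2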
Nov 26 02:12:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 261 op/s
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.670059865 +0000 UTC m=+0.082451071 container create 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.638703926 +0000 UTC m=+0.051095162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:37 compute-0 systemd[1]: Started libpod-conmon-3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73.scope.
Nov 26 02:12:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.811652122 +0000 UTC m=+0.224043368 container init 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.830032157 +0000 UTC m=+0.242423363 container start 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.835206432 +0000 UTC m=+0.247597758 container attach 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:12:37 compute-0 sleepy_tesla[447728]: 167 167
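The bare "167 167" printed by sleepy_tesla is most plausibly a uid/gid probe: 167 is the reserved ceph user and group id on RHEL-family systems, which cephadm consults when preparing daemon directories. An illustrative check on a Ceph host (assumes the ceph account exists):

    import grp
    import pwd

    # The "ceph" account is reserved as uid/gid 167 on RHEL-family hosts
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)  # 167 167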
Nov 26 02:12:37 compute-0 systemd[1]: libpod-3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73.scope: Deactivated successfully.
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.845247813 +0000 UTC m=+0.257639019 container died 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6fe14fa05a30bc6122b60fc9d8e68e2c173dca97cc9c4779627bf44ff565b68-merged.mount: Deactivated successfully.
Nov 26 02:12:37 compute-0 podman[447713]: 2025-11-26 02:12:37.90794826 +0000 UTC m=+0.320339456 container remove 3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 02:12:37 compute-0 systemd[1]: libpod-conmon-3c7d1519c7f0a9fe4657e8ea449d67ae47b7cba3357924a920705c288d8fdf73.scope: Deactivated successfully.
Nov 26 02:12:38 compute-0 podman[447750]: 2025-11-26 02:12:38.213197403 +0000 UTC m=+0.088514231 container create 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 02:12:38 compute-0 podman[447750]: 2025-11-26 02:12:38.185143257 +0000 UTC m=+0.060460125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:12:38 compute-0 systemd[1]: Started libpod-conmon-5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167.scope.
Nov 26 02:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52f1ae654df084de915d18f1802c6215a24fd64ab28e07833cb04a4907597b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52f1ae654df084de915d18f1802c6215a24fd64ab28e07833cb04a4907597b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52f1ae654df084de915d18f1802c6215a24fd64ab28e07833cb04a4907597b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52f1ae654df084de915d18f1802c6215a24fd64ab28e07833cb04a4907597b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:12:38 compute-0 podman[447750]: 2025-11-26 02:12:38.357588887 +0000 UTC m=+0.232905745 container init 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 02:12:38 compute-0 podman[447750]: 2025-11-26 02:12:38.381422445 +0000 UTC m=+0.256739283 container start 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:12:38 compute-0 podman[447750]: 2025-11-26 02:12:38.386720653 +0000 UTC m=+0.262037501 container attach 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:12:38 compute-0 nova_compute[350387]: 2025-11-26 02:12:38.441 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:12:38 compute-0 nova_compute[350387]: 2025-11-26 02:12:38.463 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:12:38 compute-0 nova_compute[350387]: 2025-11-26 02:12:38.463 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:12:39 compute-0 charming_vaughan[447765]: {
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_id": 0,
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "type": "bluestore"
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    },
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_id": 2,
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "type": "bluestore"
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    },
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_id": 1,
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:        "type": "bluestore"
Nov 26 02:12:39 compute-0 charming_vaughan[447765]:    }
Nov 26 02:12:39 compute-0 charming_vaughan[447765]: }
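This second report, keyed by OSD fsid with bluestore device mappings, appears to match "ceph-volume raw list"-style output (again inferred from the field names). A sketch that indexes it by osd_id, assuming raw holds the JSON text above:

    import json

    raw_report = json.loads(raw)  # raw = the JSON text logged above (assumed)
    devices_by_osd = {entry["osd_id"]: entry["device"]
                      for entry in raw_report.values()}
    print(devices_by_osd)
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0', 2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1'}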
Nov 26 02:12:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 164 op/s
Nov 26 02:12:39 compute-0 systemd[1]: libpod-5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167.scope: Deactivated successfully.
Nov 26 02:12:39 compute-0 podman[447750]: 2025-11-26 02:12:39.525565082 +0000 UTC m=+1.400881940 container died 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:12:39 compute-0 systemd[1]: libpod-5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167.scope: Consumed 1.125s CPU time.
Nov 26 02:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b52f1ae654df084de915d18f1802c6215a24fd64ab28e07833cb04a4907597b2-merged.mount: Deactivated successfully.
Nov 26 02:12:39 compute-0 podman[447750]: 2025-11-26 02:12:39.614686479 +0000 UTC m=+1.490003317 container remove 5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:12:39 compute-0 systemd[1]: libpod-conmon-5b1343f64fe66a93518277ccb3d9463709c095d37b30e636f07961bf17a0e167.scope: Deactivated successfully.
Nov 26 02:12:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:12:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:39 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:12:39 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3e20317a-fa79-422f-af57-ba5795687236 does not exist
Nov 26 02:12:39 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 95bb1202-faac-4b5c-a623-114a2c1efc33 does not exist
Nov 26 02:12:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:12:40 compute-0 nova_compute[350387]: 2025-11-26 02:12:40.803 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123145.7411103, 7f2d249d-4d0b-4ee7-ac66-deb2637c906d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:12:40 compute-0 nova_compute[350387]: 2025-11-26 02:12:40.805 350391 INFO nova.compute.manager [-] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] VM Stopped (Lifecycle Event)
Nov 26 02:12:40 compute-0 nova_compute[350387]: 2025-11-26 02:12:40.823 350391 DEBUG nova.compute.manager [None req-d1e836b2-507e-47ee-8f04-ed94de4d10b2 - - - - - -] [instance: 7f2d249d-4d0b-4ee7-ac66-deb2637c906d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:12:41
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', '.mgr', 'default.rgw.log']
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:12:41 compute-0 nova_compute[350387]: 2025-11-26 02:12:41.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:41 compute-0 nova_compute[350387]: 2025-11-26 02:12:41.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:12:41 compute-0 nova_compute[350387]: 2025-11-26 02:12:41.309 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 164 op/s
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:12:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:12:42 compute-0 nova_compute[350387]: 2025-11-26 02:12:42.370 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:12:42 compute-0 nova_compute[350387]: 2025-11-26 02:12:42.370 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:12:42 compute-0 nova_compute[350387]: 2025-11-26 02:12:42.370 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.873 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.874 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.874 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.882 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 74d081af-66cd-4e37-99e4-31f777885766 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:12:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:42.884 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/74d081af-66cd-4e37-99e4-31f777885766 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:12:43 compute-0 nova_compute[350387]: 2025-11-26 02:12:43.445 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.448 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Wed, 26 Nov 2025 02:12:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3f9afac7-bf58-499d-b3cf-2039a431cc6e x-openstack-request-id: req-3f9afac7-bf58-499d-b3cf-2039a431cc6e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.448 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "74d081af-66cd-4e37-99e4-31f777885766", "name": "te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n", "status": "ACTIVE", "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "user_id": "3a9710ede02d47cbb016ff596d936633", "metadata": {"metering.server_group": "bd820598-acdd-4f42-8252-1f5951161b01"}, "hostId": "0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108", "image": {"id": "dbaf181e-c7da-4938-bfef-7ab3aa9a19bc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/dbaf181e-c7da-4938-bfef-7ab3aa9a19bc"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:12:18Z", "updated": "2025-11-26T02:12:29Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.57", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:91:80:c9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/74d081af-66cd-4e37-99e4-31f777885766"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/74d081af-66cd-4e37-99e4-31f777885766"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T02:12:29.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.448 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/74d081af-66cd-4e37-99e4-31f777885766 used request id req-3f9afac7-bf58-499d-b3cf-2039a431cc6e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.452 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.459 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a6b626e1-3c31-460a-be1a-02b342efbb84 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:12:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:43.461 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a6b626e1-3c31-460a-be1a-02b342efbb84 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:12:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 KiB/s wr, 122 op/s
Nov 26 02:12:43 compute-0 podman[447861]: 2025-11-26 02:12:43.566323488 +0000 UTC m=+0.107645448 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:12:43 compute-0 podman[447860]: 2025-11-26 02:12:43.571560814 +0000 UTC m=+0.113166882 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc.)
Nov 26 02:12:43 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:43.778 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:12:43 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:43.780 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 02:12:43 compute-0 nova_compute[350387]: 2025-11-26 02:12:43.787 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:43 compute-0 nova_compute[350387]: 2025-11-26 02:12:43.935 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [{"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:12:43 compute-0 nova_compute[350387]: 2025-11-26 02:12:43.951 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-a6b626e1-3c31-460a-be1a-02b342efbb84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:12:43 compute-0 nova_compute[350387]: 2025-11-26 02:12:43.952 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.139 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Wed, 26 Nov 2025 02:12:43 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a63478b5-1a36-4e6c-8936-dcea5fc3ab24 x-openstack-request-id: req-a63478b5-1a36-4e6c-8936-dcea5fc3ab24 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.139 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a6b626e1-3c31-460a-be1a-02b342efbb84", "name": "tempest-TestNetworkBasicOps-server-1631385969", "status": "ACTIVE", "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "user_id": "a7102c5716b644e9a49ae0b2b6d2bd04", "metadata": {}, "hostId": "4f953f48991c9b2159688d6e2e47a27b2a9421d2937ca6b3e2b6c8bc", "image": {"id": "4728a8a0-1107-4816-98c6-74482d53f92c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/4728a8a0-1107-4816-98c6-74482d53f92c"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:11:04Z", "updated": "2025-11-26T02:11:26Z", "addresses": {"tempest-network-smoke--212368833": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:2c:51"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a6b626e1-3c31-460a-be1a-02b342efbb84"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a6b626e1-3c31-460a-be1a-02b342efbb84"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-280692433", "OS-SRV-USG:launched_at": "2025-11-26T02:11:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-612482367"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.139 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a6b626e1-3c31-460a-be1a-02b342efbb84 used request id req-a63478b5-1a36-4e6c-8936-dcea5fc3ab24 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.145 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6b626e1-3c31-460a-be1a-02b342efbb84', 'name': 'tempest-TestNetworkBasicOps-server-1631385969', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4728a8a0-1107-4816-98c6-74482d53f92c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'user_id': 'a7102c5716b644e9a49ae0b2b6d2bd04', 'hostId': '4f953f48991c9b2159688d6e2e47a27b2a9421d2937ca6b3e2b6c8bc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.151 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e897c19f-7590-405d-9e92-ff9e0fd9b366 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:12:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:44.152 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e897c19f-7590-405d-9e92-ff9e0fd9b366 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:12:44 compute-0 nova_compute[350387]: 2025-11-26 02:12:44.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:44 compute-0 nova_compute[350387]: 2025-11-26 02:12:44.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.045 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1974 Content-Type: application/json Date: Wed, 26 Nov 2025 02:12:44 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cb8190e8-e638-4fd6-80cd-02c785f6d92e x-openstack-request-id: req-cb8190e8-e638-4fd6-80cd-02c785f6d92e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.045 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e897c19f-7590-405d-9e92-ff9e0fd9b366", "name": "tempest-TestNetworkBasicOps-server-1078684613", "status": "ACTIVE", "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "user_id": "a7102c5716b644e9a49ae0b2b6d2bd04", "metadata": {}, "hostId": "4f953f48991c9b2159688d6e2e47a27b2a9421d2937ca6b3e2b6c8bc", "image": {"id": "4728a8a0-1107-4816-98c6-74482d53f92c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/4728a8a0-1107-4816-98c6-74482d53f92c"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:12:20Z", "updated": "2025-11-26T02:12:28Z", "addresses": {"tempest-network-smoke--212368833": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:49:31:0c"}, {"version": 4, "addr": "192.168.122.237", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:49:31:0c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e897c19f-7590-405d-9e92-ff9e0fd9b366"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e897c19f-7590-405d-9e92-ff9e0fd9b366"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-281693536", "OS-SRV-USG:launched_at": "2025-11-26T02:12:28.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-2011973456"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.047 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e897c19f-7590-405d-9e92-ff9e0fd9b366 used request id req-cb8190e8-e638-4fd6-80cd-02c785f6d92e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.050 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e897c19f-7590-405d-9e92-ff9e0fd9b366', 'name': 'tempest-TestNetworkBasicOps-server-1078684613', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4728a8a0-1107-4816-98c6-74482d53f92c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'user_id': 'a7102c5716b644e9a49ae0b2b6d2bd04', 'hostId': '4f953f48991c9b2159688d6e2e47a27b2a9421d2937ca6b3e2b6c8bc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.051 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.051 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.052 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.054 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:12:45.052920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.056 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.057 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.057 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.058 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.059 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:12:45.059658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.060 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.069 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 74d081af-66cd-4e37-99e4-31f777885766 / tap0659d4f2-a7 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.070 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.077 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a6b626e1-3c31-460a-be1a-02b342efbb84 / tap422f5ef7-f0 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.078 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.incoming.packets volume: 117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.084 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e897c19f-7590-405d-9e92-ff9e0fd9b366 / tap03ba18c7-39 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.085 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.087 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.088 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.088 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.088 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.089 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:12:45.090168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.091 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.093 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.093 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.094 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.094 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.095 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:12:45.096525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.096 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.098 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.099 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.100 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.101 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.103 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.103 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.104 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:12:45.104714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.105 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.106 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.107 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.108 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.109 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.111 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.111 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.113 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:12:45.113537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.114 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.115 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.116 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.outgoing.bytes volume: 15704 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.116 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.117 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.118 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.118 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.119 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.119 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:12:45.119478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.157 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 14310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.180 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/cpu volume: 37500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.202 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/cpu volume: 16280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.202 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.203 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.203 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.203 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.205 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:12:45.204207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.205 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.206 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.207 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.207 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.208 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.208 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:12:45.208473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.209 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.209 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 74d081af-66cd-4e37-99e4-31f777885766: ceilometer.compute.pollsters.NoVolumeException
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.210 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/memory.usage volume: 42.21484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.210 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.210 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance e897c19f-7590-405d-9e92-ff9e0fd9b366: ceilometer.compute.pollsters.NoVolumeException
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.211 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
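memory.usage is read from the hypervisor's memory statistics, and when a guest does not report them the pollster logs "volume: Unavailable" and skips that instance via NoVolumeException rather than publishing a misleading zero; that is why only a6b626e1 (42.21484375 MB) produced a sample above. A hedged sketch of that skip path (NoVolumeException is a real ceilometer class, but this reimplementation and stat_to_volume are illustrative):

    class NoVolumeException(Exception):
        """The requested statistic has no value for this instance."""

    def stat_to_volume(stats: dict, key: str) -> float:
        value = stats.get(key)
        if value is None:
            raise NoVolumeException(f"{key} statistic is not available")
        return value

    stats_by_instance = {
        "74d081af": {"memory.usage": None},         # guest not reporting
        "a6b626e1": {"memory.usage": 42.21484375},  # MB, as logged above
    }
    for instance_id, stats in stats_by_instance.items():
        try:
            print(instance_id, stat_to_volume(stats, "memory.usage"))
        except NoVolumeException as exc:
            print(f"memory.usage not available for {instance_id}: {exc}")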
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.212 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T02:12:45.212716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.213 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.213 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1631385969>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1078684613>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1631385969>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1078684613>]
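This ERROR is not a crash: the libvirt inspector exposes cumulative byte counters but no precomputed rates, so the rate pollster raises PollsterPermanentError listing the affected resources, and the manager stops scheduling them for this pollster/source pair instead of failing every cycle. A sketch of that blacklisting contract, assuming a simple in-memory blacklist (run_pollster and its structure are hypothetical, not ceilometer's code):

    class PollsterPermanentError(Exception):
        """Raised with resources this pollster can never poll."""
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    blacklist: dict[tuple, set] = {}  # (source, pollster name) -> resources

    def run_pollster(source, name, resources, get_samples):
        key = (source, name)
        todo = [r for r in resources if r not in blacklist.get(key, set())]
        if not todo:
            return []  # everything already blacklisted: nothing to poll
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as exc:
            blacklist.setdefault(key, set()).update(exc.resources)
            print(f"Preventing pollster {name} from polling {exc.resources} "
                  f"on source {source} from now on!")
            return []

On later cycles the same pollster runs with an empty resource list, so the error is logged once per resource set rather than on every polling interval.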
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.214 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.215 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.215 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:12:45.215915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.216 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.216 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.217 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.incoming.bytes volume: 20318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.218 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
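Each meter produces two heartbeat lines from different worker PIDs: worker 15, the polling worker, records the heartbeat at manager.py:636, and worker 12 persists it with a timestamp at manager.py:502, which is why the two lines occasionally interleave with neighbouring messages in the journal. A toy sketch of such a two-worker handoff through a queue; the split of duties between the PIDs is an assumption read off the log, not ceilometer's documented design:

    import datetime
    import queue
    import threading

    beats: "queue.Queue[str]" = queue.Queue()
    status: dict[str, str] = {}

    def heartbeat(meter: str) -> None:
        beats.put(meter)  # polling side: "Pollster heartbeat update: <meter>"

    def _update_status() -> None:
        while True:  # status side: "Updated heartbeat for <meter> (<ts>)"
            meter = beats.get()
            status[meter] = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Updated heartbeat for {meter} ({status[meter]})")
            beats.task_done()

    threading.Thread(target=_update_status, daemon=True).start()
    heartbeat("network.incoming.bytes")
    beats.join()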
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.219 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:12:45.220367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.221 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.222 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.222 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
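The *.delta meters read 0 for every instance because a delta sample is the current cumulative counter minus the value cached at the previous poll; network.incoming.bytes shows 20318 cumulative bytes for a6b626e1, but nothing arrived since the last cycle. A sketch of that cache-and-subtract step (the _last cache is illustrative):

    _last: dict[tuple, int] = {}  # (instance, meter) -> previous cumulative

    def delta(instance: str, meter: str, cumulative: int) -> int:
        key = (instance, meter)
        prev = _last.get(key, cumulative)  # first poll yields 0, not a spike
        _last[key] = cumulative
        return cumulative - prev

    print(delta("a6b626e1", "network.incoming.bytes", 20318))  # 0: first poll
    print(delta("a6b626e1", "network.incoming.bytes", 20318))  # 0: no traffic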
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.224 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.225 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:12:45.225940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.227 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.228 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.outgoing.packets volume: 104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.229 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
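Every "<uuid>/<meter> volume: N" debug line corresponds to one sample object built by _stats_to_sample. Ceilometer's real Sample carries more metadata (project, user, resource metadata); this trimmed dataclass only illustrates the fields visible in these log lines:

    import dataclasses
    import datetime

    @dataclasses.dataclass
    class Sample:
        name: str         # meter, e.g. "network.outgoing.packets"
        type: str         # "cumulative", "gauge" or "delta"
        unit: str         # e.g. "packet"
        volume: float     # measured value
        resource_id: str  # instance UUID
        timestamp: str

    s = Sample(name="network.outgoing.packets", type="cumulative",
               unit="packet", volume=104,
               resource_id="a6b626e1-3c31-460a-be1a-02b342efbb84",
               timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat())
    print(f"{s.resource_id}/{s.name} volume: {s.volume}")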
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.230 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.232 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:12:45.231772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.232 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.233 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.234 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.235 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.235 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.235 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.236 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:12:45.235815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.236 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.237 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.237 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.238 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.238 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.239 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:12:45.239023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.254 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.254 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.270 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.271 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.285 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.285 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
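The disk.device.* meters emit one sample per attached block device, which is why each instance appears twice per cycle above: a 1 GiB volume (1073741824) plus a much smaller second device (485376 or 509952 bytes, plausibly a config drive). A sketch of the per-device fan-out; the device names here are hypothetical:

    instances = {
        "74d081af": {"vda": 1073741824, "vdb": 509952},
        "a6b626e1": {"vda": 1073741824, "vdb": 485376},
        "e897c19f": {"vda": 1073741824, "vdb": 485376},
    }
    for instance_id, devices in instances.items():
        for dev, capacity in devices.items():
            # Per-device samples are keyed by instance plus device name.
            print(f"{instance_id}-{dev}/disk.device.capacity volume: {capacity}")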
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.286 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.287 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.287 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.287 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:12:45.287538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.342 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.342 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.413 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.414 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.473 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.474 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.475 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.475 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.475 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.475 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.476 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.476 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.476 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.477 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1631385969>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1078684613>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1631385969>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-1078684613>]
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.478 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.478 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.478 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.479 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.479 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.479 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 1677112690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.480 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 676780562 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.480 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.latency volume: 3101674160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.481 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.latency volume: 193162079 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T02:12:45.476247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.482 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.latency volume: 2504209668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.482 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.latency volume: 4755383 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.483 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:12:45.479232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
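disk.device.read.latency values in the billions are cumulative time spent in reads, in nanoseconds (consistent with libvirt's cumulative read-time counter), not per-request latencies. A worked example of the usual derived figure, average latency per request across two polls; the second-poll numbers are invented for the arithmetic:

    # Poll N uses a6b626e1's logged values; poll N+1 values are assumed.
    t0, t1 = 3_101_674_160, 3_101_674_160 + 5_000_000  # cumulative ns in reads
    r0, r1 = 1136, 1141                                # disk.device.read.requests

    avg_ns = (t1 - t0) / (r1 - r0)
    print(f"average read latency: {avg_ns / 1e6:.2f} ms")  # 1.00 ms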
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.484 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.484 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.484 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.484 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:12:45.484977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.485 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.486 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.486 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.487 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.487 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.487 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.488 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.489 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.489 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.489 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.490 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.490 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.490 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.490 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:12:45.490359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.491 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.491 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.492 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.492 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.493 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.494 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.494 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.494 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.494 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.494 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.495 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.495 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.495 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.496 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.bytes volume: 72994816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.496 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:12:45.494992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.497 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.497 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.499 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.499 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.499 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.499 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.500 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.500 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.501 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
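power.state volume 1 for all three instances matches the nova-style power-state enumeration, where 1 means RUNNING. A small lookup for reading these samples; the mapping follows nova.compute.power_state and is offered as a reading aid, not ceilometer output:

    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    for instance, volume in {"74d081af": 1, "a6b626e1": 1, "e897c19f": 1}.items():
        print(f"{instance} power.state={volume} ({POWER_STATE.get(volume, '?')})")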
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.502 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.503 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.503 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:12:45.499772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:12:45.502938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.504 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.latency volume: 8201957046 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.504 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.505 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.505 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.506 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.507 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.507 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.507 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.508 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.508 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:12:45.507656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.509 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.509 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.510 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.510 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.511 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.511 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.512 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.512 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.512 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.512 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.512 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.513 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.513 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.514 15 DEBUG ceilometer.compute.pollsters [-] a6b626e1-3c31-460a-be1a-02b342efbb84/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.514 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:12:45.512506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.515 15 DEBUG ceilometer.compute.pollsters [-] e897c19f-7590-405d-9e92-ff9e0fd9b366/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.516 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.516 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.517 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.518 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.519 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.519 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:12:45 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:12:45.519 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
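The "<instance-uuid>/<meter> volume: <n>" debug lines above are per-device pollsters flattening hypervisor counters into samples, one per (instance, device) pair. A minimal sketch of that fan-out, assuming a plain dict of per-device counters (the device names vda/vdb are hypothetical; ceilometer's _stats_to_sample reads the real values from its libvirt inspector):

    # Hypothetical per-device write-request counters, using the instance UUIDs
    # and volumes from the log lines above; two devices per instance.
    stats = {
        "a6b626e1-3c31-460a-be1a-02b342efbb84": {"vda": 305, "vdb": 0},
        "e897c19f-7590-405d-9e92-ff9e0fd9b366": {"vda": 0, "vdb": 0},
    }
    METER = "disk.device.write.requests"

    for instance, devices in stats.items():
        for device, volume in devices.items():
            # one sample per device; its resource_id is "<instance>-<device>",
            # while the debug line prints only "<instance>/<meter> volume: <n>"
            print(f"{instance}/{METER} volume: {volume}")
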
Nov 26 02:12:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 KiB/s wr, 110 op/s
Nov 26 02:12:45 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:12:45.787 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:12:46 compute-0 nova_compute[350387]: 2025-11-26 02:12:46.314 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:46 compute-0 nova_compute[350387]: 2025-11-26 02:12:46.794 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 KiB/s wr, 73 op/s
Nov 26 02:12:48 compute-0 nova_compute[350387]: 2025-11-26 02:12:48.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:48 compute-0 nova_compute[350387]: 2025-11-26 02:12:48.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:12:48 compute-0 nova_compute[350387]: 2025-11-26 02:12:48.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:48 compute-0 nova_compute[350387]: 2025-11-26 02:12:48.450 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 26 02:12:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:51 compute-0 nova_compute[350387]: 2025-11-26 02:12:51.318 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014569902975841635 of space, bias 1.0, pg target 0.43709708927524904 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
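Each pg_autoscaler pair above logs a pool's share of raw space, its bias, and the resulting PG target, and the target is reproducible as share * bias * (target PGs per OSD * OSD count). A sketch of that arithmetic, assuming the default mon_target_pg_per_osd of 100 and 3 OSDs behind this ~60 GiB cluster; the logged "quantized" figure additionally folds in the pool's current pg_num, its minimum, and the autoscaler's change threshold, which this sketch ignores:

    import math

    TOTAL_BYTES = 64411926528   # trailing figure of the effective_target_ratio lines
    PG_BUDGET = 100 * 3         # assumed: mon_target_pg_per_osd (default 100) * 3 OSDs

    def pg_target(pool_bytes, bias=1.0):
        share = pool_bytes / TOTAL_BYTES        # the logged "using X of space"
        raw = share * bias * PG_BUDGET          # the logged "pg target"
        # naive quantization: next power of two, floor of 1
        quantized = 2 ** math.ceil(math.log2(raw)) if raw > 1 else 1
        return share, raw, quantized

    # Pool 'vms' held roughly 94 MB of raw space at this point:
    print(pg_target(93_850_000))
    # share ~0.001457, raw ~0.437 (matches the logged pg target); the naive
    # quantization gives 1, while ceph-mgr keeps the current 32 because the
    # proposed change is below its adjustment threshold.
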
Nov 26 02:12:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 26 02:12:51 compute-0 nova_compute[350387]: 2025-11-26 02:12:51.922 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:53 compute-0 nova_compute[350387]: 2025-11-26 02:12:53.453 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:12:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:12:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:12:56 compute-0 nova_compute[350387]: 2025-11-26 02:12:56.324 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:12:57 compute-0 podman[447907]: 2025-11-26 02:12:57.561251141 +0000 UTC m=+0.106159055 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 02:12:57 compute-0 podman[447908]: 2025-11-26 02:12:57.567380073 +0000 UTC m=+0.098916412 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 26 02:12:57 compute-0 podman[447909]: 2025-11-26 02:12:57.585752228 +0000 UTC m=+0.119151150 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:12:58 compute-0 nova_compute[350387]: 2025-11-26 02:12:58.455 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.321 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.322 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 02:12:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:12:59 compute-0 podman[158021]: time="2025-11-26T02:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.749 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.749 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.772 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 02:12:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9110 "" "Go-http-client/1.1"
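The libpod GET requests above are podman_exporter scraping podman's REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter's config earlier in this log). A stdlib-only sketch of the same containers/json call, with http.client pointed at an AF_UNIX socket instead of TCP:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket instead of a TCP connection
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
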
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.909 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.910 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.924 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 02:12:59 compute-0 nova_compute[350387]: 2025-11-26 02:12:59.925 350391 INFO nova.compute.claims [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Claim successful on node compute-0.ctlplane.example.com
Nov 26 02:13:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.123 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:13:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:13:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713388728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.598 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
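The Running cmd / returned pair around the mon's dispatch lines is nova's RBD driver shelling out for cluster capacity. A minimal equivalent, assuming the standard ceph df JSON layout (the oslo_concurrency.processutils wrapper and error handling are omitted):

    import json
    import subprocess

    def ceph_capacity(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]
        # raw cluster totals; nova derives its DISK_GB inventory from these
        return stats["total_bytes"], stats["total_avail_bytes"]
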
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.608 350391 DEBUG nova.compute.provider_tree [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.631 350391 DEBUG nova.scheduler.client.report [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
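The inventory dict in the line above is what the resource tracker reports to Placement; usable capacity per resource class is (total - reserved) * allocation_ratio, which is quick to check against the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
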
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.667 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.668 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.735 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.737 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.760 350391 INFO nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.779 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.894 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.897 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.898 350391 INFO nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Creating image(s)
Nov 26 02:13:00 compute-0 nova_compute[350387]: 2025-11-26 02:13:00.972 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.042 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.117 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.132 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.157 350391 DEBUG nova.policy [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b8a1343dbab4fa693b622013d763897', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0ff318c290040838d6133cda861268a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.216 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
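The prlimit-wrapped qemu-img call above caps the child's address space (1 GiB) and CPU time (30 s) before probing the cached base image. A stdlib-only sketch of the same idea, setting the rlimits directly rather than going through oslo_concurrency.prlimit:

    import json
    import os
    import resource
    import subprocess

    def qemu_img_info(path):
        def cap_limits():
            # mirrors --as=1073741824 --cpu=30 from the logged command
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            preexec_fn=cap_limits,
            env={**os.environ, "LC_ALL": "C", "LANG": "C"})
        return json.loads(out)   # includes "format" and "virtual-size"
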
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.218 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.219 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.220 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.277 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.287 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.327 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:01 compute-0 openstack_network_exporter[367323]: ERROR   02:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:13:01 compute-0 openstack_network_exporter[367323]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:13:01 compute-0 openstack_network_exporter[367323]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:13:01 compute-0 openstack_network_exporter[367323]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:13:01 compute-0 openstack_network_exporter[367323]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:13:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 250 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.680 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.861 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] resizing rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
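The import-then-resize pair above is how the libvirt RBD backend materializes a root disk: push the cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk. A sketch of the same flow as two CLI calls (nova's resize step actually goes through the rbd Python bindings, per the rbd_utils.py line above):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17"
    DISK = "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", BASE, DISK,
                           "--image-format=2", *CEPH])
    subprocess.check_call(["rbd", "resize", "--pool", "vms", DISK,
                           "--size", "1G", *CEPH])   # 1073741824 bytes, as logged
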
Nov 26 02:13:01 compute-0 nova_compute[350387]: 2025-11-26 02:13:01.958 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Successfully created port: d4404ee6-7244-483c-99ba-127555e6ee3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.132 350391 DEBUG nova.objects.instance [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'migration_context' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.150 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.150 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Ensure instance console log exists: /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.151 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.151 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:02 compute-0 nova_compute[350387]: 2025-11-26 02:13:02.151 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
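The Acquiring/acquired/released triples bracketing steps like the mdev allocation above are oslo.concurrency's named locks; the waited/held timings are emitted by that wrapper. Usage boils down to one context manager (a sketch, reusing the same lock name):

    from oslo_concurrency import lockutils

    with lockutils.lock("vgpu_resources"):
        pass  # _allocate_mdevs runs here while the in-process lock is held
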
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.186 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Successfully updated port: d4404ee6-7244-483c-99ba-127555e6ee3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.209 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.209 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquired lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.210 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.310 350391 DEBUG nova.compute.manager [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-changed-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.311 350391 DEBUG nova.compute.manager [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Refreshing instance network info cache due to event network-changed-d4404ee6-7244-483c-99ba-127555e6ee3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.312 350391 DEBUG oslo_concurrency.lockutils [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:13:03 compute-0 nova_compute[350387]: 2025-11-26 02:13:03.456 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 272 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1005 KiB/s wr, 8 op/s
Nov 26 02:13:04 compute-0 nova_compute[350387]: 2025-11-26 02:13:04.268 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 02:13:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:05 compute-0 ovn_controller[89102]: 2025-11-26T02:13:05Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:31:0c 10.100.0.4
Nov 26 02:13:05 compute-0 ovn_controller[89102]: 2025-11-26T02:13:05Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:31:0c 10.100.0.4
Nov 26 02:13:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 301 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 2.5 MiB/s wr, 35 op/s
Nov 26 02:13:05 compute-0 podman[448153]: 2025-11-26 02:13:05.555139687 +0000 UTC m=+0.111976088 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm)
Nov 26 02:13:05 compute-0 podman[448154]: 2025-11-26 02:13:05.611082795 +0000 UTC m=+0.151808045 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 02:13:06 compute-0 nova_compute[350387]: 2025-11-26 02:13:06.331 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:06 compute-0 ovn_controller[89102]: 2025-11-26T02:13:06Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:80:c9 10.100.2.57
Nov 26 02:13:06 compute-0 ovn_controller[89102]: 2025-11-26T02:13:06Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:80:c9 10.100.2.57
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.437 350391 DEBUG nova.network.neutron [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.530 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Releasing lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:13:07 compute-0 podman[448195]: 2025-11-26 02:13:07.531308666 +0000 UTC m=+0.088227673 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.533 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance network_info: |[{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.535 350391 DEBUG oslo_concurrency.lockutils [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:13:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 340 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.536 350391 DEBUG nova.network.neutron [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Refreshing network info cache for port d4404ee6-7244-483c-99ba-127555e6ee3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.544 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start _get_guest_xml network_info=[{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.561 350391 WARNING nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:13:07 compute-0 podman[448196]: 2025-11-26 02:13:07.567788488 +0000 UTC m=+0.111100494 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
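
The health_status=healthy events above (ovn_controller, kepler, and multipathd) are podman executing each container's configured healthcheck, i.e. the /openstack/healthcheck test mounted in through config_data. The same checks can be triggered on demand; a small sketch, using the container_name labels from the events:

    import subprocess

    for name in ("ovn_controller", "kepler", "multipathd"):
        # `podman healthcheck run NAME` exits 0 when the configured test passes
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
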
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.570 350391 DEBUG nova.virt.libvirt.host [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.571 350391 DEBUG nova.virt.libvirt.host [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.582 350391 DEBUG nova.virt.libvirt.host [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.583 350391 DEBUG nova.virt.libvirt.host [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
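
The two probes above first look for a cgroups v1 cpu controller (missing on this host) and then for the v2 controller, which is found. On a unified (v2) hierarchy the check amounts to reading a single file; a minimal sketch of that second half:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # on cgroups v2 the root cgroup lists every available controller here
        controllers = Path(root) / "cgroup.controllers"
        if not controllers.exists():
            return False  # not a unified (v2) hierarchy
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
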
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.584 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.585 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.586 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.586 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.587 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.588 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.588 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.589 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.590 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.591 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.593 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.594 350391 DEBUG nova.virt.hardware [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
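
With no topology constraints from the flavor or image (preferred 0:0:0, maxima of 65536 each), the search above enumerates the sockets*cores*threads factorizations of the vCPU count, so a 1-vCPU guest can only yield 1:1:1. An illustrative re-derivation, not nova's exact code:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # every (sockets, cores, threads) triple whose product is the vCPU count
        for s, c, t in product(range(1, vcpus + 1), repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
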
Nov 26 02:13:07 compute-0 nova_compute[350387]: 2025-11-26 02:13:07.597 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:13:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766921829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.078 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
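
That is the stock ceph CLI, so the monitor map nova consumes here can be reproduced verbatim; a sketch that reruns the command and prints each monitor's address (field names follow the usual `ceph mon dump --format=json` output):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    monmap = json.loads(out.stdout)
    for mon in monmap.get("mons", []):
        print(mon["name"], mon.get("public_addr"))
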
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.130 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
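
The "does not exist" message is nova's rbd_utils opening the image through the librbd Python bindings and treating rbd.ImageNotFound as absence. A self-contained sketch of that probe, using the pool, image name, and cephx user from this log:

    import rados
    import rbd

    def rbd_image_exists(pool, name, conffile="/etc/ceph/ceph.conf",
                         user="openstack"):
        cluster = rados.Rados(conffile=conffile, rados_id=user)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                try:
                    rbd.Image(ioctx, name, read_only=True).close()
                    return True
                except rbd.ImageNotFound:
                    return False
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

    print(rbd_image_exists("vms", "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config"))
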
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.141 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.324 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.356 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.459 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:13:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499616606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.616 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.618 350391 DEBUG nova.virt.libvirt.vif [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:13:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.618 350391 DEBUG nova.network.os_vif_util [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.619 350391 DEBUG nova.network.os_vif_util [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.620 350391 DEBUG nova.objects.instance [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'pci_devices' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.636 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <uuid>bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</uuid>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <name>instance-0000000d</name>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:name>tempest-ServerActionsTestJSON-server-824419160</nova:name>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:13:07</nova:creationTime>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:user uuid="3b8a1343dbab4fa693b622013d763897">tempest-ServerActionsTestJSON-1777809074-project-member</nova:user>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:project uuid="e0ff318c290040838d6133cda861268a">tempest-ServerActionsTestJSON-1777809074</nova:project>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <nova:port uuid="d4404ee6-7244-483c-99ba-127555e6ee3b">
Nov 26 02:13:08 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <system>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="serial">bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="uuid">bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </system>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <os>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </os>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <features>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </features>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk">
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </source>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config">
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </source>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:13:08 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:68:03:6c"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <target dev="tapd4404ee6-72"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/console.log" append="off"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <video>
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </video>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:13:08 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:13:08 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:13:08 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:13:08 compute-0 nova_compute[350387]: </domain>
Nov 26 02:13:08 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
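
That closes the rendered <domain> document. From here the libvirt driver hands the XML to libvirtd; a minimal equivalent with the libvirt-python bindings (the file name is hypothetical, the URI is the standard system KVM endpoint):

    import libvirt

    xml = open("domain.xml").read()        # the <domain> document dumped above
    conn = libvirt.open("qemu:///system")  # same hypervisor nova-compute drives
    try:
        dom = conn.defineXML(xml)          # persist the definition
        dom.create()                       # power the guest on
    finally:
        conn.close()
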
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.644 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Preparing to wait for external event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.645 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.645 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.646 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
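
The acquire/release pair above serializes per-instance event registration through oslo.concurrency's named locks, held for 0.000s here because nothing contends. In miniature, with the instance-events key from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock("bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events"):
        # register or look up the pending network-vif-plugged event here
        pass
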
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.646 350391 DEBUG nova.virt.libvirt.vif [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:13:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.647 350391 DEBUG nova.network.os_vif_util [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.648 350391 DEBUG nova.network.os_vif_util [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.648 350391 DEBUG os_vif [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.649 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.650 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.650 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.653 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.654 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4404ee6-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.654 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4404ee6-72, col_values=(('external_ids', {'iface-id': 'd4404ee6-7244-483c-99ba-127555e6ee3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:03:6c', 'vm-uuid': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:13:08 compute-0 NetworkManager[48886]: <info>  [1764123188.6589] manager: (tapd4404ee6-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.659 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.670 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.671 350391 INFO os_vif [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72')#033[00m
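
The plug just logged is three ovsdbapp commands in one transaction (AddBridgeCommand, AddPortCommand, DbSetCommand). Reconstructed against the local OVS database with the values from this boot; the unix socket path is the usual default and an assumption here:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # idempotent: may_exist=True makes re-adding br-int a no-op
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapd4404ee6-72", may_exist=True))
        # external_ids are what lets ovn-controller match the port to its lport
        txn.add(api.db_set("Interface", "tapd4404ee6-72",
                           ("external_ids", {
                               "iface-id": "d4404ee6-7244-483c-99ba-127555e6ee3b",
                               "iface-status": "active",
                               "attached-mac": "fa:16:3e:68:03:6c",
                               "vm-uuid": "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9"})))
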
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.758 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.759 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.761 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] No VIF found with MAC fa:16:3e:68:03:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.762 350391 INFO nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Using config drive#033[00m
Nov 26 02:13:08 compute-0 nova_compute[350387]: 2025-11-26 02:13:08.812 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:13:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 340 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Nov 26 02:13:09 compute-0 nova_compute[350387]: 2025-11-26 02:13:09.582 350391 INFO nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Creating config drive at /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config#033[00m
Nov 26 02:13:09 compute-0 nova_compute[350387]: 2025-11-26 02:13:09.596 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyem7km00 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:09 compute-0 nova_compute[350387]: 2025-11-26 02:13:09.747 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyem7km00" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:13:09 compute-0 nova_compute[350387]: 2025-11-26 02:13:09.805 350391 DEBUG nova.storage.rbd_utils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] rbd image bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:13:09 compute-0 nova_compute[350387]: 2025-11-26 02:13:09.815 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.091 350391 DEBUG oslo_concurrency.processutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.092 350391 INFO nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Deleting local config drive /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.config because it was imported into RBD.#033[00m
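
The config-drive step is the two external commands logged verbatim above: mkisofs builds an ISO9660 volume labelled config-2 from a staging directory, then `rbd import` pushes it into the vms pool so the cdrom <disk> in the XML can attach it over RBD. As one sketch (the /tmp/tmpyem7km00 staging directory was a transient tempdir from this run):

    import subprocess

    uuid = "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # assemble the config-2 ISO from the staged metadata files
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpyem7km00"],
        check=True)

    # import the ISO into Ceph; the local copy can then be deleted
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
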
Nov 26 02:13:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:10 compute-0 kernel: tapd4404ee6-72: entered promiscuous mode
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.1834] manager: (tapd4404ee6-72): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 26 02:13:10 compute-0 ovn_controller[89102]: 2025-11-26T02:13:10Z|00126|binding|INFO|Claiming lport d4404ee6-7244-483c-99ba-127555e6ee3b for this chassis.
Nov 26 02:13:10 compute-0 ovn_controller[89102]: 2025-11-26T02:13:10Z|00127|binding|INFO|d4404ee6-7244-483c-99ba-127555e6ee3b: Claiming fa:16:3e:68:03:6c 10.100.0.11
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.195 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.199 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:03:6c 10.100.0.11'], port_security=['fa:16:3e:68:03:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2c25548-a42e-4a7d-850c-bdecd264a753', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0ff318c290040838d6133cda861268a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '392666de-076f-4a6b-abfe-d6c4dadf08c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4500b2b3-5d5b-4a74-8ac2-4092583234ee, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=d4404ee6-7244-483c-99ba-127555e6ee3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.201 286844 INFO neutron.agent.ovn.metadata.agent [-] Port d4404ee6-7244-483c-99ba-127555e6ee3b in datapath e2c25548-a42e-4a7d-850c-bdecd264a753 bound to our chassis
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.206 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e2c25548-a42e-4a7d-850c-bdecd264a753
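[Editor's note] The Matched UPDATE line above is ovsdbapp's row-event machinery: the metadata agent registers an event against the Southbound Port_Binding table and reacts when a row's chassis column changes. A bare-bones sketch of such a handler, following the RowEvent API that the logged PortBindingUpdatedEvent(events=('update',), table='Port_Binding', ...) repr implies; registration via notify_handler.watch_event assumes an already-connected ovsdbapp IDL named idl:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # Match UPDATE events on any Port_Binding row (no column conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds the prior values of changed columns, e.g.
            # old.chassis == [] for the claim logged above.
            print('lport %s chassis -> %s' % (row.logical_port, row.chassis))

    # idl.notify_handler.watch_event(PortBindingUpdated())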
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.223 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[adaa27d6-4103-479c-b89a-c6973a6a54fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.224 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape2c25548-a1 in ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.229 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape2c25548-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.229 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[44c9690a-c3a0-4f01-b54d-a077714aaf77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.234 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6c92e1c3-06d4-46d7-9c69-d82c1cb0b893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
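[Editor's note] Behind those privsep replies the agent is doing ordinary veth plumbing: the -a1 end of the pair goes inside the ovnmeta- namespace while the -a0 end stays in the root namespace (it is attached to br-int a few lines below). A sketch of the equivalent iproute2 calls, using the names from the log and assuming the namespace does not exist yet:

    import subprocess

    ns = "ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753"
    outer, inner = "tape2c25548-a0", "tape2c25548-a1"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", ns)                    # fails if the namespace already exists
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)     # move one end into the namespace
    sh("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
    sh("ip", "link", "set", outer, "up")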
Nov 26 02:13:10 compute-0 ovn_controller[89102]: 2025-11-26T02:13:10Z|00128|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b ovn-installed in OVS
Nov 26 02:13:10 compute-0 ovn_controller[89102]: 2025-11-26T02:13:10Z|00129|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b up in Southbound
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.241 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.243 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.250 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[d88ecd57-73a3-4d49-9f3c-3c393b07af3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 systemd-machined[138512]: New machine qemu-13-instance-0000000d.
Nov 26 02:13:10 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
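[Editor's note] systemd-machined registers every libvirt/QEMU guest as a machine, which is why the instance shows up here as qemu-13-instance-0000000d with its own scope unit. When correlating a Nova instance with its qemu process, machinectl can be queried directly; a trivial check using the machine name from the log:

    import subprocess

    # Prints the scope unit, leader (qemu) PID and uptime of the registered machine.
    subprocess.run(["machinectl", "status", "qemu-13-instance-0000000d"], check=True)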
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.279 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[68bae022-7ff2-4d7d-b852-361458ab50bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 systemd-udevd[448370]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.3004] device (tapd4404ee6-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.3017] device (tapd4404ee6-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.330 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[9453524a-0bfb-4c57-904b-7462bcc18f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.3405] manager: (tape2c25548-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.339 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a4431f73-7911-47f2-aa8c-46ea5a285785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.377 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[311ce462-d13a-41b6-afb5-6a9943d9b0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.388 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9ebed2-c35b-49ab-9791-05f19daf2c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.4154] device (tape2c25548-a0): carrier: link connected
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.423 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[49112578-ae6b-4982-a95a-12c35b2f012b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.444 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcbb572-b581-4ffc-aa4b-f0a0d9a99951]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2c25548-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:d2:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682104, 'reachable_time': 16918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448400, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.465 350391 DEBUG nova.compute.manager [req-0b9b5bfe-08b2-44e1-a162-fd14c1ac1b4c req-0702bd0a-96af-4b7e-a440-542bde3aa111 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.466 350391 DEBUG oslo_concurrency.lockutils [req-0b9b5bfe-08b2-44e1-a162-fd14c1ac1b4c req-0702bd0a-96af-4b7e-a440-542bde3aa111 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.465 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc832c9-1aa7-41f0-be7a-ee7d94b67d3a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:d29c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682104, 'tstamp': 682104}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448401, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.466 350391 DEBUG oslo_concurrency.lockutils [req-0b9b5bfe-08b2-44e1-a162-fd14c1ac1b4c req-0702bd0a-96af-4b7e-a440-542bde3aa111 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.466 350391 DEBUG oslo_concurrency.lockutils [req-0b9b5bfe-08b2-44e1-a162-fd14c1ac1b4c req-0702bd0a-96af-4b7e-a440-542bde3aa111 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.467 350391 DEBUG nova.compute.manager [req-0b9b5bfe-08b2-44e1-a162-fd14c1ac1b4c req-0702bd0a-96af-4b7e-a440-542bde3aa111 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Processing event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.490 350391 DEBUG nova.network.neutron [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updated VIF entry in instance network info cache for port d4404ee6-7244-483c-99ba-127555e6ee3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.490 350391 DEBUG nova.network.neutron [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
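[Editor's note] The instance_info_cache payload above is plain JSON, so the details that matter when debugging connectivity (port ID, MAC, fixed IPs, MTU) are easy to extract. A sketch that walks one cached entry of exactly this shape; network_info below stands in for the JSON string logged above:

    import json

    network_info = '...'   # the JSON list from update_instance_cache_with_nw_info
    for vif in json.loads(network_info):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])

    # For the port above this prints:
    # d4404ee6-7244-483c-99ba-127555e6ee3b fa:16:3e:68:03:6c ['10.100.0.11'] 1442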
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.498 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae55ffe-dc8c-4168-9f9a-9dd49bf33091]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2c25548-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:d2:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682104, 'reachable_time': 16918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448402, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.507 350391 DEBUG oslo_concurrency.lockutils [req-f866273f-96a5-4f87-af6f-4e405e376a3e req-c6de344b-c51c-45c7-8e6a-74767a37bf9e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.541 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fcc6b9-643f-45bb-a591-6821c7dee56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.627 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8313f0-b72f-4dc3-bcde-1ef76876ea00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.629 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2c25548-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.630 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.630 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2c25548-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:10 compute-0 NetworkManager[48886]: <info>  [1764123190.6365] manager: (tape2c25548-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.635 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 kernel: tape2c25548-a0: entered promiscuous mode
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.642 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.655 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape2c25548-a0, col_values=(('external_ids', {'iface-id': '3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:10 compute-0 ovn_controller[89102]: 2025-11-26T02:13:10Z|00130|binding|INFO|Releasing lport 3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7 from this chassis (sb_readonly=0)
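[Editor's note] The DelPortCommand/AddPortCommand/DbSetCommand transaction above maps one-to-one onto ovs-vsctl operations; setting external_ids:iface-id is what lets ovn-controller match the OVS interface to a Southbound logical port, hence the Releasing/Claiming lines around it. The same three steps by hand, with the names from the log:

    import subprocess

    port = "tape2c25548-a0"
    lport = "3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7"

    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port,
                    f"external_ids:iface-id={lport}"], check=True)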
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.657 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.683 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 26 02:13:10 compute-0 nova_compute[350387]: 2025-11-26 02:13:10.684 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.685 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f24b1082-36fa-4224-8e5d-54403202a4b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.687 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-e2c25548-a42e-4a7d-850c-bdecd264a753
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID e2c25548-a42e-4a7d-850c-bdecd264a753
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 26 02:13:10 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:10.689 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'env', 'PROCESS_TAG=haproxy-e2c25548-a42e-4a7d-850c-bdecd264a753', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e2c25548-a42e-4a7d-850c-bdecd264a753.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
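[Editor's note] The rendered haproxy_cfg above is written under /var/lib/neutron/ovn-metadata-proxy/<network>.conf and haproxy is launched inside the ovnmeta namespace through rootwrap. A condensed sketch of the same launch without the rootwrap indirection, which assumes the caller already runs with enough privilege to enter the namespace:

    import subprocess

    net = "e2c25548-a42e-4a7d-850c-bdecd264a753"
    conf = f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"

    # Mirrors the logged command minus 'sudo neutron-rootwrap ...'.
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net}",
         "env", f"PROCESS_TAG=haproxy-{net}",
         "haproxy", "-f", conf],
        check=True)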
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.019 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123191.0186622, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.020 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Started (Lifecycle Event)
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.022 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.028 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.035 350391 INFO nova.virt.libvirt.driver [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance spawned successfully.
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.036 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.041 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.049 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.066 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.067 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.069 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.069 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.070 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.071 350391 DEBUG nova.virt.libvirt.driver [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.078 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.079 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123191.0187893, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.080 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Paused (Lifecycle Event)
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.121 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.128 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123191.0271647, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.129 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Resumed (Lifecycle Event)
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.152 350391 INFO nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Took 10.26 seconds to spawn the instance on the hypervisor.
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.152 350391 DEBUG nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.154 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.165 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
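[Editor's note] The Started/Paused/Resumed burst above is Nova consuming libvirt lifecycle callbacks and reconciling each against the database (in Nova's power_state module 0 is NOSTATE and 1 is RUNNING, matching the "DB power_state: 0, VM power_state: 1" lines). A minimal standalone listener for the same callbacks using the libvirt-python event API; the print stands in for Nova's LifecycleEvent queueing:

    import libvirt

    def lifecycle(conn, dom, event, detail, opaque):
        # 'event' indexes VIR_DOMAIN_EVENT_* (DEFINED, UNDEFINED, STARTED,
        # SUSPENDED, RESUMED, STOPPED, ...); 'detail' refines the reason.
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()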
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.206 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.229 350391 INFO nova.compute.manager [None req-8550f2db-4672-4ca2-85b4-e0e2a0a9b730 a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Get console output
Nov 26 02:13:11 compute-0 podman[448473]: 2025-11-26 02:13:11.23190307 +0000 UTC m=+0.097993266 container create 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.232 350391 INFO nova.compute.manager [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Took 11.38 seconds to build instance.
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.254 350391 DEBUG oslo_concurrency.lockutils [None req-10c0a19d-66e4-4706-b172-b75837ab6475 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.255 445032 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 26 02:13:11 compute-0 podman[448473]: 2025-11-26 02:13:11.192801265 +0000 UTC m=+0.058891511 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:13:11 compute-0 systemd[1]: Started libpod-conmon-3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709.scope.
Nov 26 02:13:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2d00f9aeacdc74f00bcd969bdee161efce34c2906db39f269fcfb22bc5feb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:11 compute-0 podman[448473]: 2025-11-26 02:13:11.356346877 +0000 UTC m=+0.222437093 container init 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:13:11 compute-0 podman[448473]: 2025-11-26 02:13:11.37786973 +0000 UTC m=+0.243959936 container start 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:11 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [NOTICE]   (448492) : New worker (448494) forked
Nov 26 02:13:11 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [NOTICE]   (448492) : Loading success.
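[Editor's note] After "Loading success." the proxy is bound to 169.254.169.254:80 inside the namespace (per the listen stanza above), relaying requests to the /var/lib/neutron/metadata_proxy socket with the X-OVN-Network-ID header added. A quick liveness probe from the hypervisor, assuming the namespace name from the log:

    import subprocess

    ns = "ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753"

    # Confirm the listener exists inside the namespace ...
    subprocess.run(["ip", "netns", "exec", ns, "ss", "-ltn"], check=True)
    # ... and that it answers HTTP at the metadata address.
    subprocess.run(["ip", "netns", "exec", ns, "curl", "-s", "-o", "/dev/null",
                    "-w", "%{http_code}\n", "http://169.254.169.254/"], check=True)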
Nov 26 02:13:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 357 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 643 KiB/s rd, 6.0 MiB/s wr, 152 op/s
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.726 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.727 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.727 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.727 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.728 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.729 350391 INFO nova.compute.manager [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Terminating instance
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.731 350391 DEBUG nova.compute.manager [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 02:13:11 compute-0 kernel: tap03ba18c7-39 (unregistering): left promiscuous mode
Nov 26 02:13:11 compute-0 NetworkManager[48886]: <info>  [1764123191.8507] device (tap03ba18c7-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.868 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:11 compute-0 ovn_controller[89102]: 2025-11-26T02:13:11Z|00131|binding|INFO|Releasing lport 03ba18c7-398e-48f9-9269-730aa0ea6368 from this chassis (sb_readonly=0)
Nov 26 02:13:11 compute-0 ovn_controller[89102]: 2025-11-26T02:13:11Z|00132|binding|INFO|Setting lport 03ba18c7-398e-48f9-9269-730aa0ea6368 down in Southbound
Nov 26 02:13:11 compute-0 ovn_controller[89102]: 2025-11-26T02:13:11Z|00133|binding|INFO|Removing iface tap03ba18c7-39 ovn-installed in OVS
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.874 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.879 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:31:0c 10.100.0.4'], port_security=['fa:16:3e:49:31:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e897c19f-7590-405d-9e92-ff9e0fd9b366', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60e683b1-41d9-43e8-8fca-b523d72cc1fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=03ba18c7-398e-48f9-9269-730aa0ea6368) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.883 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 03ba18c7-398e-48f9-9269-730aa0ea6368 in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 unbound from our chassis
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.888 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4
Nov 26 02:13:11 compute-0 nova_compute[350387]: 2025-11-26 02:13:11.895 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.915 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[aab4a2bc-c26e-4ecd-8d81-28e5c03c5782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:11 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 26 02:13:11 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 38.861s CPU time.
Nov 26 02:13:11 compute-0 systemd-machined[138512]: Machine qemu-12-instance-0000000c terminated.
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.970 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[61aa513a-c824-44a2-91ad-d342d42aeec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:11 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:11.975 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[024e6196-ab2b-4d83-ab40-a108194f6890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.033 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[6f91ff95-ee91-4d2f-bbfa-6fe0f8ad9bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.068 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7d3f7b-b96b-490e-ae9d-f8a683f75af7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6006a9a5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:62:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670633, 'reachable_time': 43533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448512, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.098 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[164def51-ca9c-4277-9243-b9c376f0c24c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6006a9a5-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670651, 'tstamp': 670651}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448513, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6006a9a5-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670655, 'tstamp': 670655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448513, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.102 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6006a9a5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.105 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.114 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.115 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6006a9a5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.116 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.117 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6006a9a5-90, col_values=(('external_ids', {'iface-id': '0fdbc9f8-20bb-4f6b-b66d-965099ff6047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:12.119 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.174 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.195 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.196 350391 INFO nova.virt.libvirt.driver [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Instance destroyed successfully.
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.196 350391 DEBUG nova.objects.instance [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'resources' on Instance uuid e897c19f-7590-405d-9e92-ff9e0fd9b366 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.213 350391 DEBUG nova.virt.libvirt.vif [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1078684613',display_name='tempest-TestNetworkBasicOps-server-1078684613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1078684613',id=12,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFyDWvQdidKliIH+HIM7JVqsdDWQeY4BVkCwHvJcJLGUWAll4CaOk+2wkf46FTVDdHANhS0iRBWBKyNFCHlN5GDxGFhUaMWUW4q21XCkvMkhXsFc+huMMpeYvIKQhZN2Gg==',key_name='tempest-TestNetworkBasicOps-281693536',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:12:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-0ccuj94c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:12:28Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=e897c19f-7590-405d-9e92-ff9e0fd9b366,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.214 350391 DEBUG nova.network.os_vif_util [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "03ba18c7-398e-48f9-9269-730aa0ea6368", "address": "fa:16:3e:49:31:0c", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03ba18c7-39", "ovs_interfaceid": "03ba18c7-398e-48f9-9269-730aa0ea6368", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.215 350391 DEBUG nova.network.os_vif_util [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.216 350391 DEBUG os_vif [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.219 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.220 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03ba18c7-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.227 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.230 350391 INFO os_vif [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:31:0c,bridge_name='br-int',has_traffic_filtering=True,id=03ba18c7-398e-48f9-9269-730aa0ea6368,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03ba18c7-39')
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.602 350391 DEBUG nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.603 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.604 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.605 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.606 350391 DEBUG nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.607 350391 WARNING nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state active and task_state None.
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.608 350391 DEBUG nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-unplugged-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.608 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.609 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.610 350391 DEBUG oslo_concurrency.lockutils [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.610 350391 DEBUG nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] No waiting events found dispatching network-vif-unplugged-03ba18c7-398e-48f9-9269-730aa0ea6368 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:13:12 compute-0 nova_compute[350387]: 2025-11-26 02:13:12.611 350391 DEBUG nova.compute.manager [req-751c10d8-e952-4f01-b174-b3ab82580a56 req-3b1f8426-4bb7-401a-a0c4-54ada7ee658e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-unplugged-03ba18c7-398e-48f9-9269-730aa0ea6368 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.299 350391 INFO nova.virt.libvirt.driver [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Deleting instance files /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366_del
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.300 350391 INFO nova.virt.libvirt.driver [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Deletion of /var/lib/nova/instances/e897c19f-7590-405d-9e92-ff9e0fd9b366_del complete
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.386 350391 INFO nova.compute.manager [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Took 1.65 seconds to destroy the instance on the hypervisor.
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.387 350391 DEBUG oslo.service.loopingcall [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.388 350391 DEBUG nova.compute.manager [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.389 350391 DEBUG nova.network.neutron [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 26 02:13:13 compute-0 nova_compute[350387]: 2025-11-26 02:13:13.463 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 362 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 719 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Nov 26 02:13:14 compute-0 podman[448543]: 2025-11-26 02:13:14.570438771 +0000 UTC m=+0.106137785 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.587 350391 DEBUG nova.network.neutron [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.607 350391 INFO nova.compute.manager [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Took 1.22 seconds to deallocate network for instance.
Nov 26 02:13:14 compute-0 podman[448542]: 2025-11-26 02:13:14.608752134 +0000 UTC m=+0.141648750 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.664 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.664 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.692 350391 DEBUG nova.compute.manager [req-d315b037-2260-4b64-93af-1095e5af6917 req-b518f8ba-91f7-4872-96e4-c9fe659af2ac 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-deleted-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.800 350391 DEBUG oslo_concurrency.processutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.904 350391 DEBUG nova.compute.manager [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.905 350391 DEBUG oslo_concurrency.lockutils [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.905 350391 DEBUG oslo_concurrency.lockutils [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.905 350391 DEBUG oslo_concurrency.lockutils [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.906 350391 DEBUG nova.compute.manager [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] No waiting events found dispatching network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 02:13:14 compute-0 nova_compute[350387]: 2025-11-26 02:13:14.906 350391 WARNING nova.compute.manager [req-53975bc7-d4cc-4eca-8a05-ae25a700b51e req-74be7ab1-c670-48c0-b338-7c4d49619a10 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Received unexpected event network-vif-plugged-03ba18c7-398e-48f9-9269-730aa0ea6368 for instance with vm_state deleted and task_state None.
Nov 26 02:13:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:13:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454223913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.270 350391 DEBUG oslo_concurrency.processutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.279 350391 DEBUG nova.compute.provider_tree [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.453 350391 DEBUG nova.scheduler.client.report [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.478 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.514 350391 INFO nova.scheduler.client.report [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Deleted allocations for instance e897c19f-7590-405d-9e92-ff9e0fd9b366
Nov 26 02:13:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 336 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.1 MiB/s wr, 180 op/s
Nov 26 02:13:15 compute-0 nova_compute[350387]: 2025-11-26 02:13:15.588 350391 DEBUG oslo_concurrency.lockutils [None req-6200a03e-c174-4388-a455-35ae5c859b9a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "e897c19f-7590-405d-9e92-ff9e0fd9b366" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:16 compute-0 nova_compute[350387]: 2025-11-26 02:13:16.803 350391 DEBUG nova.compute.manager [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-changed-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 02:13:16 compute-0 nova_compute[350387]: 2025-11-26 02:13:16.803 350391 DEBUG nova.compute.manager [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Refreshing instance network info cache due to event network-changed-d4404ee6-7244-483c-99ba-127555e6ee3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 02:13:16 compute-0 nova_compute[350387]: 2025-11-26 02:13:16.804 350391 DEBUG oslo_concurrency.lockutils [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:13:16 compute-0 nova_compute[350387]: 2025-11-26 02:13:16.804 350391 DEBUG oslo_concurrency.lockutils [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:13:16 compute-0 nova_compute[350387]: 2025-11-26 02:13:16.805 350391 DEBUG nova.network.neutron [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Refreshing network info cache for port d4404ee6-7244-483c-99ba-127555e6ee3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 02:13:17 compute-0 nova_compute[350387]: 2025-11-26 02:13:17.223 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 282 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.467 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.763 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.764 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.765 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.765 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.766 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.767 350391 INFO nova.compute.manager [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Terminating instance
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.769 350391 DEBUG nova.compute.manager [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 02:13:18 compute-0 kernel: tap422f5ef7-f0 (unregistering): left promiscuous mode
Nov 26 02:13:18 compute-0 NetworkManager[48886]: <info>  [1764123198.8787] device (tap422f5ef7-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.892 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:18 compute-0 ovn_controller[89102]: 2025-11-26T02:13:18Z|00134|binding|INFO|Releasing lport 422f5ef7-f048-4c83-a300-8b5942aafb8f from this chassis (sb_readonly=0)
Nov 26 02:13:18 compute-0 ovn_controller[89102]: 2025-11-26T02:13:18Z|00135|binding|INFO|Setting lport 422f5ef7-f048-4c83-a300-8b5942aafb8f down in Southbound
Nov 26 02:13:18 compute-0 ovn_controller[89102]: 2025-11-26T02:13:18Z|00136|binding|INFO|Removing iface tap422f5ef7-f0 ovn-installed in OVS
Nov 26 02:13:18 compute-0 nova_compute[350387]: 2025-11-26 02:13:18.916 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:18 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:18.922 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:2c:51 10.100.0.13'], port_security=['fa:16:3e:a9:2c:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a6b626e1-3c31-460a-be1a-02b342efbb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b6275f-0b2c-431d-b2a1-cb057a9f12fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=422f5ef7-f048-4c83-a300-8b5942aafb8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:18 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:18.924 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 422f5ef7-f048-4c83-a300-8b5942aafb8f in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 unbound from our chassis
Nov 26 02:13:18 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:18.925 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 26 02:13:18 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:18.928 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[1e95a25a-4884-4c58-9420-0f8873320d31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:13:18 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:18.929 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4 namespace which is not needed anymore
Nov 26 02:13:18 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 26 02:13:18 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 52.136s CPU time.
Nov 26 02:13:18 compute-0 systemd-machined[138512]: Machine qemu-9-instance-00000009 terminated.
Nov 26 02:13:19 compute-0 kernel: tap422f5ef7-f0: entered promiscuous mode
Nov 26 02:13:19 compute-0 kernel: tap422f5ef7-f0 (unregistering): left promiscuous mode
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.004 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:19 compute-0 ovn_controller[89102]: 2025-11-26T02:13:19Z|00137|binding|INFO|Claiming lport 422f5ef7-f048-4c83-a300-8b5942aafb8f for this chassis.
Nov 26 02:13:19 compute-0 ovn_controller[89102]: 2025-11-26T02:13:19Z|00138|binding|INFO|422f5ef7-f048-4c83-a300-8b5942aafb8f: Claiming fa:16:3e:a9:2c:51 10.100.0.13
Nov 26 02:13:19 compute-0 NetworkManager[48886]: <info>  [1764123199.0104] manager: (tap422f5ef7-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.023 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:2c:51 10.100.0.13'], port_security=['fa:16:3e:a9:2c:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a6b626e1-3c31-460a-be1a-02b342efbb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b6275f-0b2c-431d-b2a1-cb057a9f12fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=422f5ef7-f048-4c83-a300-8b5942aafb8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.034 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:19 compute-0 ovn_controller[89102]: 2025-11-26T02:13:19Z|00139|binding|INFO|Releasing lport 422f5ef7-f048-4c83-a300-8b5942aafb8f from this chassis (sb_readonly=0)
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.046 350391 INFO nova.virt.libvirt.driver [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Instance destroyed successfully.
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.046 350391 DEBUG nova.objects.instance [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lazy-loading 'resources' on Instance uuid a6b626e1-3c31-460a-be1a-02b342efbb84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.049 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:2c:51 10.100.0.13'], port_security=['fa:16:3e:a9:2c:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a6b626e1-3c31-460a-be1a-02b342efbb84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66fdcaf8e71a4c809ab9cab4c64ca9d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b6275f-0b2c-431d-b2a1-cb057a9f12fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=995a63f2-436e-4878-a062-61a1cd67b7e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=422f5ef7-f048-4c83-a300-8b5942aafb8f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.058 350391 DEBUG nova.virt.libvirt.vif [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:11:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1631385969',display_name='tempest-TestNetworkBasicOps-server-1631385969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1631385969',id=9,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVShY87yzlQOWe2u5ta5RUz1JTn9hlbCsCTuoOM49NKuxjE+WriVj7MZBmGhYZn3KtsgUeQW4ny49nFDDbEDIaBG+pCU+fOKCpWz3oR3Z1j5AqqbJOXWrfIzpHCXMzVNA==',key_name='tempest-TestNetworkBasicOps-280692433',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='66fdcaf8e71a4c809ab9cab4c64ca9d5',ramdisk_id='',reservation_id='r-eah10bx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-345735252',owner_user_name='tempest-TestNetworkBasicOps-345735252-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:11:26Z,user_data=None,user_id='a7102c5716b644e9a49ae0b2b6d2bd04',uuid=a6b626e1-3c31-460a-be1a-02b342efbb84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.059 350391 DEBUG nova.network.os_vif_util [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converting VIF {"id": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "address": "fa:16:3e:a9:2c:51", "network": {"id": "6006a9a5-9f5c-48b2-8574-7469a748b2e4", "bridge": "br-int", "label": "tempest-network-smoke--212368833", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "66fdcaf8e71a4c809ab9cab4c64ca9d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap422f5ef7-f0", "ovs_interfaceid": "422f5ef7-f048-4c83-a300-8b5942aafb8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.059 350391 DEBUG nova.network.os_vif_util [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.059 350391 DEBUG os_vif [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.061 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.061 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap422f5ef7-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.064 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.065 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.067 350391 INFO os_vif [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:2c:51,bridge_name='br-int',has_traffic_filtering=True,id=422f5ef7-f048-4c83-a300-8b5942aafb8f,network=Network(6006a9a5-9f5c-48b2-8574-7469a748b2e4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap422f5ef7-f0')#033[00m
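os_vif dispatches that unplug to its 'ovs' plugin via stevedore. A hedged sketch of driving the library directly; the VIF and network values are copied from the log records above, while the InstanceInfo name is illustrative:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...)

    net = network.Network(id='6006a9a5-9f5c-48b2-8574-7469a748b2e4',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='422f5ef7-f048-4c83-a300-8b5942aafb8f',
        address='fa:16:3e:a9:2c:51',
        vif_name='tap422f5ef7-f0',
        bridge_name='br-int',
        plugin='ovs',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='a6b626e1-3c31-460a-be1a-02b342efbb84',  # instance being deleted here
        name='tempest-server')                        # illustrative display name

    os_vif.unplug(ovs_vif, inst)  # the call the INFO line above reports as successful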
Nov 26 02:13:19 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [NOTICE]   (443798) : haproxy version is 2.8.14-c23fe91
Nov 26 02:13:19 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [NOTICE]   (443798) : path to executable is /usr/sbin/haproxy
Nov 26 02:13:19 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [WARNING]  (443798) : Exiting Master process...
Nov 26 02:13:19 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [ALERT]    (443798) : Current worker (443807) exited with code 143 (Terminated)
Nov 26 02:13:19 compute-0 neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4[443776]: [WARNING]  (443798) : All workers exited. Exiting... (0)
Nov 26 02:13:19 compute-0 systemd[1]: libpod-233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f.scope: Deactivated successfully.
Nov 26 02:13:19 compute-0 podman[448635]: 2025-11-26 02:13:19.244269613 +0000 UTC m=+0.169562711 container died 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f-userdata-shm.mount: Deactivated successfully.
Nov 26 02:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b87993993d907c4c1282e1161ef9c6579d6143f9cee71cdea5072bafb3dbf4-merged.mount: Deactivated successfully.
Nov 26 02:13:19 compute-0 podman[448635]: 2025-11-26 02:13:19.413404672 +0000 UTC m=+0.338697730 container cleanup 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.416 350391 DEBUG nova.network.neutron [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updated VIF entry in instance network info cache for port d4404ee6-7244-483c-99ba-127555e6ee3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.417 350391 DEBUG nova.network.neutron [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:13:19 compute-0 systemd[1]: libpod-conmon-233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f.scope: Deactivated successfully.
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.453 350391 DEBUG oslo_concurrency.lockutils [req-39b49710-e587-4235-9ec4-5f7621b02a39 req-df59e98c-ff03-454f-b843-5049a7f4c6cd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:13:19 compute-0 podman[448680]: 2025-11-26 02:13:19.543749754 +0000 UTC m=+0.089419886 container remove 233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 282 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 140 op/s
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.554 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[e546edfe-053a-456a-9dea-e2785867ab16]: (4, ('Wed Nov 26 02:13:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4 (233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f)\n233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f\nWed Nov 26 02:13:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4 (233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f)\n233e965bf809b82f1de538910d77139824ee23680c73715bc29898bf0462ea6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.564 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8adc41-8a7b-4b77-aa98-6e62331b773f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.567 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6006a9a5-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.570 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:19 compute-0 kernel: tap6006a9a5-90: left promiscuous mode
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.590 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4153fcb9-30fb-4776-a91e-48487bf37a59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.597 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.610 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[59403b9d-6b3a-4902-b25e-29bcb0f39af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.612 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c830b7-304b-40a0-b851-81640bd68637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.633 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[13f439da-b3ab-4c54-b9db-fc5296a5ad04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670623, 'reachable_time': 39561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448696, 'error': None, 'target': 'ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
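That privsep reply is a raw pyroute2 netlink dump of the loopback device inside the ovnmeta namespace. A small sketch producing the same kind of dump, assuming pyroute2 is available and the namespace still exists:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4')
    try:
        for link in ns.get_links():
            # Each message carries the IFLA_* attributes seen in the dump above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_MTU'))
    finally:
        ns.close()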
Nov 26 02:13:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d6006a9a5\x2d9f5c\x2d48b2\x2d8574\x2d7469a748b2e4.mount: Deactivated successfully.
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.641 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.641 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe546e5-40db-46e7-bad3-ce0eafb7b3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
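Neutron's privileged remove_netns helper (ip_lib.py:607 above) deletes the namespace through pyroute2; a sketch under that assumption, mirroring its tolerance for an already-removed namespace:

    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-6006a9a5-9f5c-48b2-8574-7469a748b2e4')
    except OSError as e:
        if e.errno != errno.ENOENT:  # already gone: treat as success
            raise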
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.642 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 422f5ef7-f048-4c83-a300-8b5942aafb8f in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 unbound from our chassis#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.644 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.645 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2addfcd9-5347-42b8-aea0-9b41951605ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.646 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 422f5ef7-f048-4c83-a300-8b5942aafb8f in datapath 6006a9a5-9f5c-48b2-8574-7469a748b2e4 unbound from our chassis#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.648 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6006a9a5-9f5c-48b2-8574-7469a748b2e4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:13:19 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:19.649 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf83088-652c-4553-8543-514f13b56fb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.679 350391 DEBUG nova.compute.manager [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-unplugged-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.679 350391 DEBUG oslo_concurrency.lockutils [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.680 350391 DEBUG oslo_concurrency.lockutils [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.680 350391 DEBUG oslo_concurrency.lockutils [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.681 350391 DEBUG nova.compute.manager [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] No waiting events found dispatching network-vif-unplugged-422f5ef7-f048-4c83-a300-8b5942aafb8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:13:19 compute-0 nova_compute[350387]: 2025-11-26 02:13:19.681 350391 DEBUG nova.compute.manager [req-af3ea887-7146-4f0b-9346-6f53fe968074 req-76b38329-f57c-414c-a1ea-1bbb10174613 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-unplugged-422f5ef7-f048-4c83-a300-8b5942aafb8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
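The network-vif-unplugged event handled above reaches Nova through the os-server-external-events API, which Neutron calls when a port is unbound. A hedged sketch of posting the same event with python-novaclient; the Keystone endpoint and credentials are placeholders:

    from keystoneauth1 import loading
    from keystoneauth1 import session as ks_session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone:5000/v3',  # placeholder endpoint
        username='neutron', password='secret',
        project_name='service',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=ks_session.Session(auth=auth))

    nova.server_external_events.create([{
        'server_uuid': 'a6b626e1-3c31-460a-be1a-02b342efbb84',
        'name': 'network-vif-unplugged',
        'tag': '422f5ef7-f048-4c83-a300-8b5942aafb8f',  # the port UUID
    }])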
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.029 350391 INFO nova.virt.libvirt.driver [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Deleting instance files /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84_del#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.030 350391 INFO nova.virt.libvirt.driver [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Deletion of /var/lib/nova/instances/a6b626e1-3c31-460a-be1a-02b342efbb84_del complete#033[00m
Nov 26 02:13:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.133 350391 INFO nova.compute.manager [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Took 1.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.133 350391 DEBUG oslo.service.loopingcall [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.134 350391 DEBUG nova.compute.manager [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.134 350391 DEBUG nova.network.neutron [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.824 350391 DEBUG nova.network.neutron [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.846 350391 INFO nova.compute.manager [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Took 0.71 seconds to deallocate network for instance.#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.888 350391 DEBUG nova.compute.manager [req-1c108796-ab28-4980-a1df-90224f9938da req-9cd3d19d-26ff-415b-91cc-9778db0416a9 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-deleted-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.895 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:20 compute-0 nova_compute[350387]: 2025-11-26 02:13:20.895 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
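The acquire/release pairs logged around compute_resources come from oslo.concurrency's lockutils; a minimal sketch of both forms of the same pattern:

    from oslo_concurrency import lockutils

    # Context-manager form, as around the resource tracker's critical sections.
    with lockutils.lock('compute_resources'):
        pass  # mutate tracked resources here

    # Decorator form; entry and exit produce "acquired by"/"released by" lines
    # like the ones above.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass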
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.006 350391 DEBUG oslo_concurrency.processutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:13:21 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448097732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.525 350391 DEBUG oslo_concurrency.processutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
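For RBD-backed storage the libvirt driver sizes the cluster by shelling out to ceph df, exactly as the two lines above record. A sketch of the same call and the cluster-wide JSON fields it can read (byte counts come straight from ceph):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)

    total = stats['stats']['total_bytes']        # cluster capacity, bytes
    avail = stats['stats']['total_avail_bytes']  # cluster free space, bytes
    print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')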
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.535 350391 DEBUG nova.compute.provider_tree [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:13:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 219 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 156 op/s
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.555 350391 DEBUG nova.scheduler.client.report [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
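Placement turns each inventory record into usable capacity as (total - reserved) * allocation_ratio; applied to the inventory reported above, the node can hand out 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {  # values copied from the log line above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2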
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.577 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.597 350391 INFO nova.scheduler.client.report [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Deleted allocations for instance a6b626e1-3c31-460a-be1a-02b342efbb84#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.686 350391 DEBUG oslo_concurrency.lockutils [None req-4ff89803-050f-45a9-a420-a6b3f888cc3a a7102c5716b644e9a49ae0b2b6d2bd04 66fdcaf8e71a4c809ab9cab4c64ca9d5 - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.764 350391 DEBUG nova.compute.manager [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.765 350391 DEBUG oslo_concurrency.lockutils [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.777 350391 DEBUG oslo_concurrency.lockutils [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.777 350391 DEBUG oslo_concurrency.lockutils [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "a6b626e1-3c31-460a-be1a-02b342efbb84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.778 350391 DEBUG nova.compute.manager [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] No waiting events found dispatching network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:13:21 compute-0 nova_compute[350387]: 2025-11-26 02:13:21.778 350391 WARNING nova.compute.manager [req-ed6c1015-5347-456e-9d24-a63a102f0bd2 req-8fece208-79f1-4651-ae51-06e7cb1e9d3a 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Received unexpected event network-vif-plugged-422f5ef7-f048-4c83-a300-8b5942aafb8f for instance with vm_state deleted and task_state None.#033[00m
Nov 26 02:13:23 compute-0 nova_compute[350387]: 2025-11-26 02:13:23.470 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 127 op/s
Nov 26 02:13:24 compute-0 nova_compute[350387]: 2025-11-26 02:13:24.063 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:24.998 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:24.999 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:13:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:25.000 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 121 op/s
Nov 26 02:13:26 compute-0 ovn_controller[89102]: 2025-11-26T02:13:26Z|00140|binding|INFO|Releasing lport 3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7 from this chassis (sb_readonly=0)
Nov 26 02:13:26 compute-0 ovn_controller[89102]: 2025-11-26T02:13:26Z|00141|binding|INFO|Releasing lport b6066942-f0e5-4ff0-92ae-a027fdd86fa7 from this chassis (sb_readonly=0)
Nov 26 02:13:26 compute-0 nova_compute[350387]: 2025-11-26 02:13:26.658 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:13:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2290183702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:13:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:13:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2290183702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:13:27 compute-0 nova_compute[350387]: 2025-11-26 02:13:27.188 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123192.1876097, e897c19f-7590-405d-9e92-ff9e0fd9b366 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:13:27 compute-0 nova_compute[350387]: 2025-11-26 02:13:27.189 350391 INFO nova.compute.manager [-] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:13:27 compute-0 nova_compute[350387]: 2025-11-26 02:13:27.216 350391 DEBUG nova.compute.manager [None req-14453760-29b2-40ef-aa1c-896e0aaf2486 - - - - - -] [instance: e897c19f-7590-405d-9e92-ff9e0fd9b366] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:13:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 91 op/s
Nov 26 02:13:28 compute-0 nova_compute[350387]: 2025-11-26 02:13:28.475 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:28 compute-0 podman[448720]: 2025-11-26 02:13:28.561406713 +0000 UTC m=+0.116457244 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 02:13:28 compute-0 podman[448722]: 2025-11-26 02:13:28.569400647 +0000 UTC m=+0.109388756 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:13:28 compute-0 podman[448721]: 2025-11-26 02:13:28.589765038 +0000 UTC m=+0.129375826 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:13:29 compute-0 nova_compute[350387]: 2025-11-26 02:13:29.067 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:13:29 compute-0 podman[158021]: time="2025-11-26T02:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:13:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:13:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9114 "" "Go-http-client/1.1"
Nov 26 02:13:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:31 compute-0 openstack_network_exporter[367323]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:13:31 compute-0 openstack_network_exporter[367323]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:13:31 compute-0 openstack_network_exporter[367323]: ERROR   02:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:13:31 compute-0 openstack_network_exporter[367323]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:13:31 compute-0 openstack_network_exporter[367323]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:13:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Nov 26 02:13:33 compute-0 nova_compute[350387]: 2025-11-26 02:13:33.480 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 26 02:13:34 compute-0 nova_compute[350387]: 2025-11-26 02:13:34.042 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123199.040346, a6b626e1-3c31-460a-be1a-02b342efbb84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:13:34 compute-0 nova_compute[350387]: 2025-11-26 02:13:34.043 350391 INFO nova.compute.manager [-] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:13:34 compute-0 nova_compute[350387]: 2025-11-26 02:13:34.068 350391 DEBUG nova.compute.manager [None req-a9d75e58-25d4-4427-a623-b198684498b8 - - - - - -] [instance: a6b626e1-3c31-460a-be1a-02b342efbb84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:13:34 compute-0 nova_compute[350387]: 2025-11-26 02:13:34.070 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.334 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.335 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.376 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.376 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.376 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.377 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.377 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:13:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:13:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:13:35 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932565590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:13:35 compute-0 nova_compute[350387]: 2025-11-26 02:13:35.889 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.010 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.011 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.022 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.023 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:13:36 compute-0 podman[448798]: 2025-11-26 02:13:36.031278676 +0000 UTC m=+0.083593143 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 26 02:13:36 compute-0 podman[448799]: 2025-11-26 02:13:36.129233961 +0000 UTC m=+0.164302975 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.499 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.500 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.921905517578125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.501 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.501 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.660 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.660 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.661 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.661 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:13:36 compute-0 nova_compute[350387]: 2025-11-26 02:13:36.717 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:13:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:13:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296145533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:13:37 compute-0 nova_compute[350387]: 2025-11-26 02:13:37.213 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:13:37 compute-0 nova_compute[350387]: 2025-11-26 02:13:37.225 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:13:37 compute-0 nova_compute[350387]: 2025-11-26 02:13:37.248 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:13:37 compute-0 nova_compute[350387]: 2025-11-26 02:13:37.288 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:13:37 compute-0 nova_compute[350387]: 2025-11-26 02:13:37.289 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:13:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:13:38 compute-0 nova_compute[350387]: 2025-11-26 02:13:38.255 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:38 compute-0 nova_compute[350387]: 2025-11-26 02:13:38.256 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:38 compute-0 nova_compute[350387]: 2025-11-26 02:13:38.483 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:38 compute-0 podman[448860]: 2025-11-26 02:13:38.505038168 +0000 UTC m=+0.071760642 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, version=9.4)
Nov 26 02:13:38 compute-0 podman[448861]: 2025-11-26 02:13:38.538138825 +0000 UTC m=+0.095867457 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:39 compute-0 nova_compute[350387]: 2025-11-26 02:13:39.074 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:13:39 compute-0 nova_compute[350387]: 2025-11-26 02:13:39.636 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:40 compute-0 nova_compute[350387]: 2025-11-26 02:13:40.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:13:41
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bb7af78a-451b-48f5-bc82-e72559c0015d does not exist
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 17587593-9d1f-4e73-8701-0af47814ec0b does not exist
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3854bc9d-10d1-4735-be23-90ae7aa39236 does not exist
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:13:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:13:41 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:13:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:13:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:13:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:13:42 compute-0 podman[449161]: 2025-11-26 02:13:42.46286551 +0000 UTC m=+0.085145527 container create 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:13:42 compute-0 podman[449161]: 2025-11-26 02:13:42.434393712 +0000 UTC m=+0.056673799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.541 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.541 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.541 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.542 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:13:42 compute-0 systemd[1]: Started libpod-conmon-9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5.scope.
Nov 26 02:13:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:42 compute-0 podman[449161]: 2025-11-26 02:13:42.643546482 +0000 UTC m=+0.265826579 container init 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:42 compute-0 podman[449161]: 2025-11-26 02:13:42.661528086 +0000 UTC m=+0.283808133 container start 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:13:42 compute-0 podman[449161]: 2025-11-26 02:13:42.668405979 +0000 UTC m=+0.290686076 container attach 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:13:42 compute-0 nova_compute[350387]: 2025-11-26 02:13:42.672 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:42 compute-0 awesome_goldwasser[449177]: 167 167
Nov 26 02:13:42 compute-0 systemd[1]: libpod-9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5.scope: Deactivated successfully.
Nov 26 02:13:42 compute-0 podman[449182]: 2025-11-26 02:13:42.762455063 +0000 UTC m=+0.059861157 container died 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9d654b0b2899af0ad83f684003c3c662e0b8ae675f4eec59ecb8828aa031bb9-merged.mount: Deactivated successfully.
Nov 26 02:13:42 compute-0 podman[449182]: 2025-11-26 02:13:42.843182585 +0000 UTC m=+0.140588639 container remove 9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:13:42 compute-0 systemd[1]: libpod-conmon-9d324c15885f633917aaa4279f37a5680746c5131154b9853a29d2a6a51b73e5.scope: Deactivated successfully.
Nov 26 02:13:43 compute-0 podman[449201]: 2025-11-26 02:13:43.152243434 +0000 UTC m=+0.115015863 container create f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 02:13:43 compute-0 podman[449201]: 2025-11-26 02:13:43.100925526 +0000 UTC m=+0.063697955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:43 compute-0 systemd[1]: Started libpod-conmon-f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb.scope.
Nov 26 02:13:43 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:43 compute-0 podman[449201]: 2025-11-26 02:13:43.371914599 +0000 UTC m=+0.334686998 container init f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:13:43 compute-0 podman[449201]: 2025-11-26 02:13:43.391889069 +0000 UTC m=+0.354661458 container start f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:13:43 compute-0 podman[449201]: 2025-11-26 02:13:43.40407547 +0000 UTC m=+0.366847899 container attach f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:13:43 compute-0 nova_compute[350387]: 2025-11-26 02:13:43.485 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:13:44 compute-0 nova_compute[350387]: 2025-11-26 02:13:44.077 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:44 compute-0 nova_compute[350387]: 2025-11-26 02:13:44.136 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:13:44 compute-0 nova_compute[350387]: 2025-11-26 02:13:44.268 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:13:44 compute-0 nova_compute[350387]: 2025-11-26 02:13:44.270 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:13:44 compute-0 elastic_sanderson[449218]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:13:44 compute-0 elastic_sanderson[449218]: --> relative data size: 1.0
Nov 26 02:13:44 compute-0 elastic_sanderson[449218]: --> All data devices are unavailable
Nov 26 02:13:44 compute-0 systemd[1]: libpod-f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb.scope: Deactivated successfully.
Nov 26 02:13:44 compute-0 systemd[1]: libpod-f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb.scope: Consumed 1.192s CPU time.
Nov 26 02:13:44 compute-0 podman[449201]: 2025-11-26 02:13:44.698527709 +0000 UTC m=+1.661300108 container died f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 26 02:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b47996799d410c9fd5374a16bbeaf74fa7b4cbe096f39e9d3dbfb5bc17919443-merged.mount: Deactivated successfully.
Nov 26 02:13:44 compute-0 podman[449201]: 2025-11-26 02:13:44.95053754 +0000 UTC m=+1.913309939 container remove f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:13:44 compute-0 systemd[1]: libpod-conmon-f54f7e468151fdd89fcc3e58fde092d77b86686b04729e292cf741abe700eafb.scope: Deactivated successfully.
Nov 26 02:13:45 compute-0 podman[449247]: 2025-11-26 02:13:45.005566232 +0000 UTC m=+0.296268132 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, container_name=openstack_network_exporter)
Nov 26 02:13:45 compute-0 podman[449248]: 2025-11-26 02:13:45.034139292 +0000 UTC m=+0.317953009 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:13:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.133518) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225133539, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1290, "num_deletes": 250, "total_data_size": 1897140, "memory_usage": 1929920, "flush_reason": "Manual Compaction"}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225146997, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1131422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37809, "largest_seqno": 39098, "table_properties": {"data_size": 1126762, "index_size": 2056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12525, "raw_average_key_size": 20, "raw_value_size": 1116494, "raw_average_value_size": 1854, "num_data_blocks": 93, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123101, "oldest_key_time": 1764123101, "file_creation_time": 1764123225, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 13535 microseconds, and 4294 cpu microseconds.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.147049) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1131422 bytes OK
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.147068) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.150442) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.150458) EVENT_LOG_v1 {"time_micros": 1764123225150452, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.150475) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1891328, prev total WAL file size 1902639, number of live WAL files 2.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.151756) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353036' seq:72057594037927935, type:22 .. '6D6772737461740031373537' seq:0, type:0; will stop at (end)
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1104KB)], [86(9673KB)]
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225151814, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11037510, "oldest_snapshot_seqno": -1}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5836 keys, 8473513 bytes, temperature: kUnknown
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225218965, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8473513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8436398, "index_size": 21453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147637, "raw_average_key_size": 25, "raw_value_size": 8332907, "raw_average_value_size": 1427, "num_data_blocks": 885, "num_entries": 5836, "num_filter_entries": 5836, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123225, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.219343) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8473513 bytes
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.223654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.9 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.4 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.2) write-amplify(7.5) OK, records in: 6296, records dropped: 460 output_compression: NoCompression
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.223692) EVENT_LOG_v1 {"time_micros": 1764123225223677, "job": 50, "event": "compaction_finished", "compaction_time_micros": 67346, "compaction_time_cpu_micros": 34279, "output_level": 6, "num_output_files": 1, "total_output_size": 8473513, "num_input_records": 6296, "num_output_records": 5836, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225224544, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225226765, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.151542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.227756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225227810, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 263, "num_deletes": 251, "total_data_size": 14596, "memory_usage": 20872, "flush_reason": "Manual Compaction"}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225233005, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 14554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39099, "largest_seqno": 39361, "table_properties": {"data_size": 12717, "index_size": 70, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4731, "raw_average_key_size": 18, "raw_value_size": 9236, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123225, "oldest_key_time": 1764123225, "file_creation_time": 1764123225, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 5321 microseconds, and 1200 cpu microseconds.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.233075) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 14554 bytes OK
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.233094) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.235373) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.235392) EVENT_LOG_v1 {"time_micros": 1764123225235387, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.235408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 12552, prev total WAL file size 12552, number of live WAL files 2.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.236366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(14KB)], [89(8274KB)]
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225236409, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8488067, "oldest_snapshot_seqno": -1}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5589 keys, 6772701 bytes, temperature: kUnknown
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225272330, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 6772701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6738896, "index_size": 18719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 143201, "raw_average_key_size": 25, "raw_value_size": 6641375, "raw_average_value_size": 1188, "num_data_blocks": 760, "num_entries": 5589, "num_filter_entries": 5589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123225, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.273093) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 6772701 bytes
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.275037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.5 rd, 187.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 8.1 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(1048.6) write-amplify(465.3) OK, records in: 6096, records dropped: 507 output_compression: NoCompression
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.275064) EVENT_LOG_v1 {"time_micros": 1764123225275052, "job": 52, "event": "compaction_finished", "compaction_time_micros": 36049, "compaction_time_cpu_micros": 18671, "output_level": 6, "num_output_files": 1, "total_output_size": 6772701, "num_input_records": 6096, "num_output_records": 5589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225275170, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123225276673, "job": 52, "event": "table_file_deletion", "file_number": 89}
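
Two details in the JOB 51/52 burst above can be checked by hand. The write-amplify(465.3) and read-write-amplify(1048.6) figures follow from the byte counts quoted in the same records, and the key range in the manual-compaction line is just hex-encoded bytes (monitor paxos keys). A quick Python sketch, using only numbers taken from the log:

    # Byte counts quoted for JOB 52 in the records above.
    l0_input = 14554               # table #91, the L0 flush output
    total_input = 8488067          # "input_data_size" (L0 file #91 + L6 file #89)
    output = 6772701               # table #92 written to L6

    # These match rocksdb's summary line: amplification is measured
    # against the bytes entering from L0.
    print(round(output / l0_input, 1))                  # 465.3  (write-amplify)
    print(round((total_input + output) / l0_input, 1))  # 1048.6 (read-write-amplify)

    # The manual compaction range decodes to paxos transaction keys.
    print(bytes.fromhex("7061786F730033353134"))  # b'paxos\x003514'
    print(bytes.fromhex("7061786F730033373636"))  # b'paxos\x003766'

The record counts tie out as well: 6096 in minus 507 dropped leaves the 5589 entries of table #92, plausibly the 251 tombstones in table #91 plus the keys they deleted being compacted away.
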
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.236246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.276902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.276906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.276908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.276910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:13:45.276912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:13:45 compute-0 nova_compute[350387]: 2025-11-26 02:13:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:45 compute-0 nova_compute[350387]: 2025-11-26 02:13:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 26 02:13:45 compute-0 podman[449436]: 2025-11-26 02:13:45.9313059 +0000 UTC m=+0.077507273 container create 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:45 compute-0 podman[449436]: 2025-11-26 02:13:45.893954713 +0000 UTC m=+0.040156146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:45 compute-0 systemd[1]: Started libpod-conmon-0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4.scope.
Nov 26 02:13:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:46 compute-0 podman[449436]: 2025-11-26 02:13:46.053046691 +0000 UTC m=+0.199248044 container init 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:13:46 compute-0 podman[449436]: 2025-11-26 02:13:46.061525778 +0000 UTC m=+0.207727131 container start 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:46 compute-0 podman[449436]: 2025-11-26 02:13:46.067145796 +0000 UTC m=+0.213347159 container attach 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:13:46 compute-0 determined_perlman[449450]: 167 167
Nov 26 02:13:46 compute-0 systemd[1]: libpod-0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4.scope: Deactivated successfully.
Nov 26 02:13:46 compute-0 podman[449436]: 2025-11-26 02:13:46.072736902 +0000 UTC m=+0.218938245 container died 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d5d29439740bb480d40e2b00b5055399d557dae112db92a8b1f422fcac5b618-merged.mount: Deactivated successfully.
Nov 26 02:13:46 compute-0 podman[449436]: 2025-11-26 02:13:46.119391649 +0000 UTC m=+0.265592992 container remove 0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:13:46 compute-0 systemd[1]: libpod-conmon-0ab85d84b913f42adb88f2caa30f67dc82669856aca3d08455d5ee56598c09b4.scope: Deactivated successfully.
Nov 26 02:13:46 compute-0 podman[449473]: 2025-11-26 02:13:46.36856401 +0000 UTC m=+0.077803440 container create 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 02:13:46 compute-0 podman[449473]: 2025-11-26 02:13:46.335500483 +0000 UTC m=+0.044739963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:46 compute-0 systemd[1]: Started libpod-conmon-8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2.scope.
Nov 26 02:13:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/994eac440f27f2c0cac856d5e8c32f650827c9ff5c90de4b791c6b2890e2e127/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/994eac440f27f2c0cac856d5e8c32f650827c9ff5c90de4b791c6b2890e2e127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/994eac440f27f2c0cac856d5e8c32f650827c9ff5c90de4b791c6b2890e2e127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/994eac440f27f2c0cac856d5e8c32f650827c9ff5c90de4b791c6b2890e2e127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
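
These xfs messages are informational: the filesystems were apparently mounted without the bigtime on-disk feature, so inode timestamps top out at the 32-bit signed limit the kernel prints. Decoding that limit (a trivial check, not something taken from the log):

    from datetime import datetime, timezone

    # 0x7fffffff = largest 32-bit signed Unix timestamp ("year 2038" limit).
    print(0x7FFFFFFF)                                           # 2147483647
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
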
Nov 26 02:13:46 compute-0 podman[449473]: 2025-11-26 02:13:46.527802952 +0000 UTC m=+0.237042412 container init 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 02:13:46 compute-0 podman[449473]: 2025-11-26 02:13:46.545634841 +0000 UTC m=+0.254874261 container start 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:13:46 compute-0 podman[449473]: 2025-11-26 02:13:46.551773603 +0000 UTC m=+0.261013033 container attach 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:13:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:46.696 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:13:46 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:46.698 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 02:13:46 compute-0 nova_compute[350387]: 2025-11-26 02:13:46.699 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:47 compute-0 frosty_hermann[449490]: {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    "0": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "devices": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "/dev/loop3"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            ],
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_name": "ceph_lv0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_size": "21470642176",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "name": "ceph_lv0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "tags": {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_name": "ceph",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.crush_device_class": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.encrypted": "0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_id": "0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.vdo": "0"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            },
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "vg_name": "ceph_vg0"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        }
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    ],
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    "1": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "devices": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "/dev/loop4"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            ],
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_name": "ceph_lv1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_size": "21470642176",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "name": "ceph_lv1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "tags": {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_name": "ceph",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.crush_device_class": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.encrypted": "0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_id": "1",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.vdo": "0"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            },
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "vg_name": "ceph_vg1"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        }
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    ],
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    "2": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "devices": [
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "/dev/loop5"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            ],
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_name": "ceph_lv2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_size": "21470642176",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "name": "ceph_lv2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "tags": {
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.cluster_name": "ceph",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.crush_device_class": "",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.encrypted": "0",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osd_id": "2",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:                "ceph.vdo": "0"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            },
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "type": "block",
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:            "vg_name": "ceph_vg2"
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:        }
Nov 26 02:13:47 compute-0 frosty_hermann[449490]:    ]
Nov 26 02:13:47 compute-0 frosty_hermann[449490]: }
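
The JSON block above, printed by the short-lived frosty_hermann container, is keyed by OSD id and has the shape of ceph-volume's LVM inventory (cephadm routinely runs such queries in throwaway containers). A sketch of extracting a device map from it, assuming the block has been captured into a string named report:

    import json

    lvs = json.loads(report)  # `report` is assumed to hold the JSON above
    for osd_id, entries in sorted(lvs.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"fsid={tags['ceph.osd_fsid']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=835781ef-644a-4834-abb3-029e5bcba0ff
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5 fsid=8f697525-afad-4f38-820d-80587338cf3b
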
Nov 26 02:13:47 compute-0 systemd[1]: libpod-8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2.scope: Deactivated successfully.
Nov 26 02:13:47 compute-0 podman[449473]: 2025-11-26 02:13:47.369272978 +0000 UTC m=+1.078512398 container died 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 02:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-994eac440f27f2c0cac856d5e8c32f650827c9ff5c90de4b791c6b2890e2e127-merged.mount: Deactivated successfully.
Nov 26 02:13:47 compute-0 podman[449473]: 2025-11-26 02:13:47.495207657 +0000 UTC m=+1.204447057 container remove 8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:13:47 compute-0 systemd[1]: libpod-conmon-8d35e34a9bf4d84495baf51079363563908a65e5557be0b082bd1a54f3c8e4a2.scope: Deactivated successfully.
Nov 26 02:13:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 224 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Nov 26 02:13:47 compute-0 nova_compute[350387]: 2025-11-26 02:13:47.562 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:48 compute-0 ovn_controller[89102]: 2025-11-26T02:13:48Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:03:6c 10.100.0.11
Nov 26 02:13:48 compute-0 ovn_controller[89102]: 2025-11-26T02:13:48Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:03:6c 10.100.0.11
Nov 26 02:13:48 compute-0 nova_compute[350387]: 2025-11-26 02:13:48.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:48 compute-0 nova_compute[350387]: 2025-11-26 02:13:48.488 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.692806402 +0000 UTC m=+0.096784463 container create 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.661539586 +0000 UTC m=+0.065517707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:48 compute-0 systemd[1]: Started libpod-conmon-08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f.scope.
Nov 26 02:13:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.844649016 +0000 UTC m=+0.248627047 container init 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.859591855 +0000 UTC m=+0.263569926 container start 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:13:48 compute-0 blissful_neumann[449665]: 167 167
Nov 26 02:13:48 compute-0 systemd[1]: libpod-08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f.scope: Deactivated successfully.
Nov 26 02:13:48 compute-0 conmon[449665]: conmon 08934292d24dcb008b70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f.scope/container/memory.events
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.866282132 +0000 UTC m=+0.270260163 container attach 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.87405164 +0000 UTC m=+0.278029711 container died 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-060bd72ead302d1cfc7d810420108377c72c7aee3197ecb196c2268e8f870748-merged.mount: Deactivated successfully.
Nov 26 02:13:48 compute-0 podman[449649]: 2025-11-26 02:13:48.959908886 +0000 UTC m=+0.363886927 container remove 08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_neumann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 02:13:48 compute-0 systemd[1]: libpod-conmon-08934292d24dcb008b7094fcaebac05c3ef22f199bc3190ceae17270c2c35c8f.scope: Deactivated successfully.
Nov 26 02:13:49 compute-0 nova_compute[350387]: 2025-11-26 02:13:49.080 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:49 compute-0 podman[449690]: 2025-11-26 02:13:49.270622811 +0000 UTC m=+0.117550874 container create 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 02:13:49 compute-0 podman[449690]: 2025-11-26 02:13:49.230081956 +0000 UTC m=+0.077010059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:13:49 compute-0 systemd[1]: Started libpod-conmon-9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7.scope.
Nov 26 02:13:49 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09de3a74f2c93abc6e0ddfe7c5f702767f4d0d0cfb99716fce6ca57d138a9c33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09de3a74f2c93abc6e0ddfe7c5f702767f4d0d0cfb99716fce6ca57d138a9c33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09de3a74f2c93abc6e0ddfe7c5f702767f4d0d0cfb99716fce6ca57d138a9c33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09de3a74f2c93abc6e0ddfe7c5f702767f4d0d0cfb99716fce6ca57d138a9c33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:13:49 compute-0 podman[449690]: 2025-11-26 02:13:49.469759021 +0000 UTC m=+0.316687094 container init 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:13:49 compute-0 podman[449690]: 2025-11-26 02:13:49.486179141 +0000 UTC m=+0.333107204 container start 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 02:13:49 compute-0 podman[449690]: 2025-11-26 02:13:49.492055416 +0000 UTC m=+0.338983469 container attach 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:13:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 224 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Nov 26 02:13:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:50 compute-0 nova_compute[350387]: 2025-11-26 02:13:50.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:13:50 compute-0 nova_compute[350387]: 2025-11-26 02:13:50.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:13:50 compute-0 infallible_brown[449706]: {
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_id": 0,
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "type": "bluestore"
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    },
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_id": 2,
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "type": "bluestore"
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    },
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_id": 1,
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:13:50 compute-0 infallible_brown[449706]:        "type": "bluestore"
Nov 26 02:13:50 compute-0 infallible_brown[449706]:    }
Nov 26 02:13:50 compute-0 infallible_brown[449706]: }
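
This second report (infallible_brown) is keyed by osd_uuid and resolves to the device-mapper paths; the config-key writes a few lines below look like cephadm persisting this gathered inventory. A cross-check of the two listings against each other, with report_by_id and report_by_uuid assumed to hold the two JSON blocks printed above:

    import json

    by_id = json.loads(report_by_id)      # keyed by osd_id (frosty_hermann output)
    by_uuid = json.loads(report_by_uuid)  # keyed by osd_uuid (infallible_brown output)

    for osd_id, entries in by_id.items():
        fsid = entries[0]["tags"]["ceph.osd_fsid"]
        raw = by_uuid[fsid]
        # The per-uuid record should point back at the same OSD.
        assert raw["osd_id"] == int(osd_id) and raw["type"] == "bluestore"
        print(f"osd.{osd_id}: lv={entries[0]['lv_path']} dm={raw['device']}")
    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 dm=/dev/mapper/ceph_vg0-ceph_lv0 (and so on)
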
Nov 26 02:13:50 compute-0 systemd[1]: libpod-9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7.scope: Deactivated successfully.
Nov 26 02:13:50 compute-0 systemd[1]: libpod-9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7.scope: Consumed 1.077s CPU time.
Nov 26 02:13:50 compute-0 podman[449690]: 2025-11-26 02:13:50.563925337 +0000 UTC m=+1.410853370 container died 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-09de3a74f2c93abc6e0ddfe7c5f702767f4d0d0cfb99716fce6ca57d138a9c33-merged.mount: Deactivated successfully.
Nov 26 02:13:50 compute-0 podman[449690]: 2025-11-26 02:13:50.646415508 +0000 UTC m=+1.493343541 container remove 9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:13:50 compute-0 systemd[1]: libpod-conmon-9f64fc8a134daee171c355941113b0de97b829ad98434aa4bdf20ead844955f7.scope: Deactivated successfully.
Nov 26 02:13:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:13:50 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:13:50.700 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:13:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:13:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2eb9ef9d-0c69-4204-8a96-62ca965fbf2e does not exist
Nov 26 02:13:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b1bc26df-e986-44b0-a51b-2a74c7b3931c does not exist
Nov 26 02:13:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:13:51 compute-0 nova_compute[350387]: 2025-11-26 02:13:51.484 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001413303481311454 of space, bias 1.0, pg target 0.4239910443934362 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
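
Each pg_autoscaler line above is the same small calculation: the pool's usage fraction times its bias times a PG budget, rounded to a power of two and floored at the pool's pg_num_min. The budget that reproduces these numbers is 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; the per-pool pg_num_min values used below (1 for .mgr, 16 for the cephfs metadata pool, 32 otherwise) are inferred from the output, not stated in the log. A sketch:

    def nearest_power_of_two(x):
        # closest power of two, with a floor of 1
        if x <= 1:
            return 1
        lo = 1 << (int(x).bit_length() - 1)
        return lo if (x - lo) < (2 * lo - x) else 2 * lo

    def quantized_pg_target(usage, bias, pg_num_min, budget=300):
        # budget inferred: 3 OSDs * mon_target_pg_per_osd (default 100)
        return max(pg_num_min, nearest_power_of_two(usage * bias * budget))

    print(quantized_pg_target(7.185749983720779e-06, 1.0, 1))   # 1  ('.mgr')
    print(quantized_pg_target(0.001413303481311454, 1.0, 32))   # 32 ('vms')
    print(quantized_pg_target(5.087256625643029e-07, 4.0, 16))  # 16 ('cephfs.cephfs.meta')
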
Nov 26 02:13:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 234 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Nov 26 02:13:53 compute-0 nova_compute[350387]: 2025-11-26 02:13:53.491 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:13:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 26 02:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:13:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 8677 writes, 39K keys, 8677 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
Cumulative WAL: 8677 writes, 8677 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1388 writes, 6739 keys, 1388 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
Interval WAL: 1388 writes, 1388 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    112.7      0.42              0.23        26    0.016       0      0       0.0       0.0
  L6      1/0    6.46 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    169.8    139.0      1.33              0.79        25    0.053    128K    13K       0.0       0.0
 Sum      1/0    6.46 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9    129.0    132.7      1.76              1.02        51    0.034    128K    13K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7    139.0    134.8      0.48              0.26        14    0.034     42K   3541       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    169.8    139.0      1.33              0.79        25    0.053    128K    13K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    114.5      0.42              0.23        25    0.017       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.046, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 1.8 seconds
Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 308.00 MB usage: 27.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000243 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1746,26.41 MB,8.57424%) FilterBlock(52,364.67 KB,0.115625%) IndexBlock(52,621.56 KB,0.197076%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 26 02:13:54 compute-0 nova_compute[350387]: 2025-11-26 02:13:54.085 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:13:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 26 02:13:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 26 02:13:58 compute-0 nova_compute[350387]: 2025-11-26 02:13:58.495 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:59 compute-0 nova_compute[350387]: 2025-11-26 02:13:59.089 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:13:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 573 KiB/s wr, 20 op/s
Nov 26 02:13:59 compute-0 podman[449803]: 2025-11-26 02:13:59.567317867 +0000 UTC m=+0.107787781 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:13:59 compute-0 podman[449804]: 2025-11-26 02:13:59.577294537 +0000 UTC m=+0.107808142 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:13:59 compute-0 podman[449802]: 2025-11-26 02:13:59.582635056 +0000 UTC m=+0.129551101 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 26 02:13:59 compute-0 podman[158021]: time="2025-11-26T02:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:13:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:13:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Nov 26 02:14:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:01 compute-0 openstack_network_exporter[367323]: ERROR   02:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:14:01 compute-0 openstack_network_exporter[367323]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:14:01 compute-0 openstack_network_exporter[367323]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:14:01 compute-0 openstack_network_exporter[367323]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:14:01 compute-0 openstack_network_exporter[367323]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:14:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 573 KiB/s wr, 20 op/s
Nov 26 02:14:03 compute-0 nova_compute[350387]: 2025-11-26 02:14:03.530 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 84 KiB/s wr, 6 op/s
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.092 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.164 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.165 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.182 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.269 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.270 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.283 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.284 350391 INFO nova.compute.claims [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.443 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:14:04 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830503369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.977 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:14:04 compute-0 nova_compute[350387]: 2025-11-26 02:14:04.992 350391 DEBUG nova.compute.provider_tree [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.021 350391 DEBUG nova.scheduler.client.report [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.052 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.054 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 02:14:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.157 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.158 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.183 350391 INFO nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.208 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.302 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.304 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.305 350391 INFO nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Creating image(s)#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.371 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.437 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.495 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.510 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.548 350391 DEBUG nova.policy [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '236e06cd46874605a18288ba033ee875', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8fc101eeda814bb98f1a44c789c8958f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 02:14:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.605 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.606 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.608 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.608 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "beedb32a5f0393b3b7ca21cf7409d6e587060a17" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.657 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:14:05 compute-0 nova_compute[350387]: 2025-11-26 02:14:05.669 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.119 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.310 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Successfully created port: 20b2d898-f324-4aae-ae7e-59312c845d00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.331 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] resizing rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.567 350391 DEBUG nova.objects.instance [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lazy-loading 'migration_context' on Instance uuid 8f12f2a2-6379-4fcb-b93e-eac05f10f599 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.580 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.581 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Ensure instance console log exists: /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.581 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.581 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:06 compute-0 nova_compute[350387]: 2025-11-26 02:14:06.582 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:06 compute-0 podman[450032]: 2025-11-26 02:14:06.596801631 +0000 UTC m=+0.146821235 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:14:06 compute-0 podman[450033]: 2025-11-26 02:14:06.639672602 +0000 UTC m=+0.192330060 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.008 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Successfully updated port: 20b2d898-f324-4aae-ae7e-59312c845d00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.027 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.027 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquired lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.028 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.269 350391 DEBUG nova.compute.manager [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-changed-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.270 350391 DEBUG nova.compute.manager [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Refreshing instance network info cache due to event network-changed-20b2d898-f324-4aae-ae7e-59312c845d00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.271 350391 DEBUG oslo_concurrency.lockutils [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:14:07 compute-0 nova_compute[350387]: 2025-11-26 02:14:07.352 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 02:14:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 256 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 852 KiB/s wr, 14 op/s
Nov 26 02:14:08 compute-0 nova_compute[350387]: 2025-11-26 02:14:08.533 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:09 compute-0 nova_compute[350387]: 2025-11-26 02:14:09.094 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:09 compute-0 podman[450091]: 2025-11-26 02:14:09.537982268 +0000 UTC m=+0.090989011 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 02:14:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 256 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 842 KiB/s wr, 14 op/s
Nov 26 02:14:09 compute-0 podman[450092]: 2025-11-26 02:14:09.576791175 +0000 UTC m=+0.119825798 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.100 350391 DEBUG nova.network.neutron [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updating instance_info_cache with network_info: [{"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:14:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.137 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Releasing lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.139 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Instance network_info: |[{"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.141 350391 DEBUG oslo_concurrency.lockutils [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.142 350391 DEBUG nova.network.neutron [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Refreshing network info cache for port 20b2d898-f324-4aae-ae7e-59312c845d00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.149 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Start _get_guest_xml network_info=[{"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.175 350391 WARNING nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.186 350391 DEBUG nova.virt.libvirt.host [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.187 350391 DEBUG nova.virt.libvirt.host [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.194 350391 DEBUG nova.virt.libvirt.host [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.195 350391 DEBUG nova.virt.libvirt.host [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.197 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.198 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:09:07Z,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d902f6105ab4c81a51a4751fa89a83e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:09:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.199 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.200 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.201 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.202 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.203 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.204 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.205 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.206 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.207 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.208 350391 DEBUG nova.virt.hardware [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.214 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:14:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288743822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.742 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.788 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:14:10 compute-0 nova_compute[350387]: 2025-11-26 02:14:10.799 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:14:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902218767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.348 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.354 350391 DEBUG nova.virt.libvirt.vif [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1676766604',display_name='tempest-TestServerBasicOps-server-1676766604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1676766604',id=14,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbE8la1gGbjQiMcSfF/XigEXCELNDkg7Bg++ChqSdPSjpeMvCOTzJudEtOKmieBCaeA40kk3ByO6Qz/g2P2LT+PPC7W+fCyL+638Mcm5qJam9Lyn3htqyGvZHvxNtPzpg==',key_name='tempest-TestServerBasicOps-638417550',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fc101eeda814bb98f1a44c789c8958f',ramdisk_id='',reservation_id='r-51jplxlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-969259594',owner_user_name='tempest-TestServerBasicOps-969259594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='236e06cd46874605a18288ba033ee875',uuid=8f12f2a2-6379-4fcb-b93e-eac05f10f599,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.357 350391 DEBUG nova.network.os_vif_util [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converting VIF {"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.360 350391 DEBUG nova.network.os_vif_util [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.363 350391 DEBUG nova.objects.instance [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f12f2a2-6379-4fcb-b93e-eac05f10f599 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.387 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <uuid>8f12f2a2-6379-4fcb-b93e-eac05f10f599</uuid>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <name>instance-0000000e</name>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:name>tempest-TestServerBasicOps-server-1676766604</nova:name>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:14:10</nova:creationTime>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:user uuid="236e06cd46874605a18288ba033ee875">tempest-TestServerBasicOps-969259594-project-member</nova:user>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:project uuid="8fc101eeda814bb98f1a44c789c8958f">tempest-TestServerBasicOps-969259594</nova:project>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <nova:port uuid="20b2d898-f324-4aae-ae7e-59312c845d00">
Nov 26 02:14:11 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <system>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="serial">8f12f2a2-6379-4fcb-b93e-eac05f10f599</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="uuid">8f12f2a2-6379-4fcb-b93e-eac05f10f599</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </system>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <os>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </os>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <features>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </features>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk">
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </source>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config">
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </source>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:14:11 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:04:0d:fa"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <target dev="tap20b2d898-f3"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/console.log" append="off"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <video>
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </video>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:14:11 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:14:11 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:14:11 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:14:11 compute-0 nova_compute[350387]: </domain>
Nov 26 02:14:11 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.404 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Preparing to wait for external event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.405 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.405 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.406 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.407 350391 DEBUG nova.virt.libvirt.vif [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1676766604',display_name='tempest-TestServerBasicOps-server-1676766604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1676766604',id=14,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbE8la1gGbjQiMcSfF/XigEXCELNDkg7Bg++ChqSdPSjpeMvCOTzJudEtOKmieBCaeA40kk3ByO6Qz/g2P2LT+PPC7W+fCyL+638Mcm5qJam9Lyn3htqyGvZHvxNtPzpg==',key_name='tempest-TestServerBasicOps-638417550',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fc101eeda814bb98f1a44c789c8958f',ramdisk_id='',reservation_id='r-51jplxlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-969259594',owner_user_name='tempest-TestServerBasicOps-969259594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='236e06cd46874605a18288ba033ee875',uuid=8f12f2a2-6379-4fcb-b93e-eac05f10f599,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.408 350391 DEBUG nova.network.os_vif_util [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converting VIF {"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.409 350391 DEBUG nova.network.os_vif_util [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.409 350391 DEBUG os_vif [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.411 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.412 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.413 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.418 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.419 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20b2d898-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.420 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20b2d898-f3, col_values=(('external_ids', {'iface-id': '20b2d898-f324-4aae-ae7e-59312c845d00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:0d:fa', 'vm-uuid': '8f12f2a2-6379-4fcb-b93e-eac05f10f599'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.423 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:11 compute-0 NetworkManager[48886]: <info>  [1764123251.4267] manager: (tap20b2d898-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.427 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.434 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.434 350391 INFO os_vif [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3')
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.522 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.523 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.524 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] No VIF found with MAC fa:16:3e:04:0d:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.525 350391 INFO nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Using config drive
Nov 26 02:14:11 compute-0 nova_compute[350387]: 2025-11-26 02:14:11.571 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:14:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 282 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.563 350391 INFO nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Creating config drive at /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.577 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1cgdhxtg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.650 350391 DEBUG nova.network.neutron [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updated VIF entry in instance network info cache for port 20b2d898-f324-4aae-ae7e-59312c845d00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.651 350391 DEBUG nova.network.neutron [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updating instance_info_cache with network_info: [{"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.688 350391 DEBUG oslo_concurrency.lockutils [req-da4f8809-5931-45fb-a932-94d0df5aea27 req-1f90848e-aec8-4b88-9cf4-15d686e0e1dd 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.729 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1cgdhxtg" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.802 350391 DEBUG nova.storage.rbd_utils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] rbd image 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 26 02:14:12 compute-0 nova_compute[350387]: 2025-11-26 02:14:12.816 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.119 350391 DEBUG oslo_concurrency.processutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config 8f12f2a2-6379-4fcb-b93e-eac05f10f599_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.123 350391 INFO nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Deleting local config drive /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.config because it was imported into RBD.
Nov 26 02:14:13 compute-0 kernel: tap20b2d898-f3: entered promiscuous mode
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.2236] manager: (tap20b2d898-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 26 02:14:13 compute-0 ovn_controller[89102]: 2025-11-26T02:14:13Z|00142|binding|INFO|Claiming lport 20b2d898-f324-4aae-ae7e-59312c845d00 for this chassis.
Nov 26 02:14:13 compute-0 ovn_controller[89102]: 2025-11-26T02:14:13Z|00143|binding|INFO|20b2d898-f324-4aae-ae7e-59312c845d00: Claiming fa:16:3e:04:0d:fa 10.100.0.6
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.243 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.265 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0d:fa 10.100.0.6'], port_security=['fa:16:3e:04:0d:fa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8f12f2a2-6379-4fcb-b93e-eac05f10f599', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d28058d3-5123-44dd-9839-1c451b6aed46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fc101eeda814bb98f1a44c789c8958f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42e95318-726c-4ecf-a3b6-a6d03830d387 eae8d84f-0041-4340-9d86-01ee4f5b7c47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38707aa4-19c4-4574-af55-4f9c77111de6, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=20b2d898-f324-4aae-ae7e-59312c845d00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.268 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:13 compute-0 ovn_controller[89102]: 2025-11-26T02:14:13Z|00144|binding|INFO|Setting lport 20b2d898-f324-4aae-ae7e-59312c845d00 ovn-installed in OVS
Nov 26 02:14:13 compute-0 ovn_controller[89102]: 2025-11-26T02:14:13Z|00145|binding|INFO|Setting lport 20b2d898-f324-4aae-ae7e-59312c845d00 up in Southbound
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.271 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.268 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 20b2d898-f324-4aae-ae7e-59312c845d00 in datapath d28058d3-5123-44dd-9839-1c451b6aed46 bound to our chassis
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.271 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d28058d3-5123-44dd-9839-1c451b6aed46
Nov 26 02:14:13 compute-0 systemd-udevd[450264]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.284 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c58e000f-4759-4dda-9850-dc6ae843eeec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.285 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd28058d3-51 in ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.287 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd28058d3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.287 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[043d93d8-880a-4456-9077-5c4d303ddfab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.290 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f288d2de-6ea8-4ee1-87ce-ef029dacd876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 systemd-machined[138512]: New machine qemu-14-instance-0000000e.
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.3057] device (tap20b2d898-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.303 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2f0b69-bdbe-4e46-b405-1ab02561218a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.3103] device (tap20b2d898-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:14:13 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.335 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fb626f-f107-408b-aa9d-53d1ca1a0759]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.368 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[c060393b-65d5-401f-9039-dc56dd3d4c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.380 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[d88182b9-0950-4f97-b1c0-a40b9853a665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.3846] manager: (tapd28058d3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 26 02:14:13 compute-0 systemd-udevd[450268]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.428 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[b25527e1-707e-4745-8f05-c1f695bab320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.434 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[8041d289-88c3-47c7-81bd-790df2c9cf97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.4662] device (tapd28058d3-50): carrier: link connected
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.478 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[1c75610b-2a81-477f-8207-aa8f4a807afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.500 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[63401bf1-877c-43dc-a3c2-b4c245e5297a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd28058d3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:12:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688409, 'reachable_time': 36233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450297, 'error': None, 'target': 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.523 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[169da483-d6b6-44dc-94eb-351141ee1651]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:12ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688409, 'tstamp': 688409}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450298, 'error': None, 'target': 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.536 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.550 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8e14258b-8d23-450f-bed4-bbbf297ddec2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd28058d3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:12:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688409, 'reachable_time': 36233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450299, 'error': None, 'target': 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
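The privsep reply above is a marshalled RTM_NEWLINK dump for tapd28058d3-51 inside the ovnmeta- namespace. A minimal sketch of fetching the same attributes directly with pyroute2 (namespace name and attribute names taken from the reply; running it requires the privileges the privsep daemon holds):

    from pyroute2 import NetNS

    # Read the link attributes seen in the reply above, straight from the
    # metadata namespace (needs CAP_NET_ADMIN, as the privsep helper has).
    with NetNS('ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46') as ns:
        idx = ns.link_lookup(ifname='tapd28058d3-51')[0]
        msg = ns.link('get', index=idx)[0]
        print(msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:98:12:ec
        print(msg.get_attr('IFLA_MTU'))        # 1500
        print(msg.get_attr('IFLA_OPERSTATE'))  # UP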
Nov 26 02:14:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 282 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.609 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bba9ac10-615a-4db5-99b0-aa93fce0c153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.731 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[80d6b7ce-8df1-482d-bb5b-b98d088664bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.734 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd28058d3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.735 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.736 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd28058d3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.740 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:13 compute-0 NetworkManager[48886]: <info>  [1764123253.7431] manager: (tapd28058d3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 26 02:14:13 compute-0 kernel: tapd28058d3-50: entered promiscuous mode
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.753 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.756 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd28058d3-50, col_values=(('external_ids', {'iface-id': '531173c6-caf0-426f-baae-53346817fcdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
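The DelPortCommand / AddPortCommand / DbSetCommand records above are ovsdbapp transactions moving tapd28058d3-50 onto br-int and stamping the iface-id that ovn-controller binds on. A standalone sketch of the same three commands; the socket path and timeout are assumptions, and a real agent reuses one long-lived connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd28058d3-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd28058d3-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd28058d3-50',
            ('external_ids', {'iface-id': '531173c6-caf0-426f-baae-53346817fcdd'})))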
Nov 26 02:14:13 compute-0 ovn_controller[89102]: 2025-11-26T02:14:13Z|00146|binding|INFO|Releasing lport 531173c6-caf0-426f-baae-53346817fcdd from this chassis (sb_readonly=0)
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.758 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:13 compute-0 nova_compute[350387]: 2025-11-26 02:14:13.785 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.787 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d28058d3-5123-44dd-9839-1c451b6aed46.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d28058d3-5123-44dd-9839-1c451b6aed46.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
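The ENOENT above is the normal first-provisioning path: before spawning haproxy the agent looks for a pidfile from a previous proxy. A hedged sketch of that check, modelled on the get_value_from_file helper named in the log (exact neutron behaviour may differ):

    def get_value_from_file(path, converter=None):
        # Return the stripped file contents (optionally converted), or None
        # when the file is missing -- mirroring the debug message above.
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except OSError as err:
            print('Unable to access %s; Error: %s' % (path, err))
            return None

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              'd28058d3-5123-44dd-9839-1c451b6aed46.pid.haproxy', int)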
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.790 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[812f18bc-06b0-4656-a20d-cf688dab12e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.793 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-d28058d3-5123-44dd-9839-1c451b6aed46
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/d28058d3-5123-44dd-9839-1c451b6aed46.pid.haproxy
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID d28058d3-5123-44dd-9839-1c451b6aed46
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 02:14:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:13.799 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46', 'env', 'PROCESS_TAG=haproxy-d28058d3-5123-44dd-9839-1c451b6aed46', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d28058d3-5123-44dd-9839-1c451b6aed46.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
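That rootwrap invocation closes out the config dump above: haproxy is started with the rendered file inside the port's network namespace, tagged via PROCESS_TAG so the agent can find it later. The same launch as a plain subprocess call (a sketch; neutron routes this through create_process and rootwrap filters):

    import subprocess

    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46',
           'env', 'PROCESS_TAG=haproxy-d28058d3-5123-44dd-9839-1c451b6aed46',
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/d28058d3-5123-44dd-9839-1c451b6aed46.conf']
    # The 'daemon' directive in the rendered config makes haproxy fork and
    # detach, so this call returns once the parent process exits.
    subprocess.check_call(cmd)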
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.022 350391 DEBUG nova.compute.manager [req-8a5b1007-339b-486f-9eda-da7e45d4bb96 req-0e2376d0-0a1b-4c09-98cc-2052997da12d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.023 350391 DEBUG oslo_concurrency.lockutils [req-8a5b1007-339b-486f-9eda-da7e45d4bb96 req-0e2376d0-0a1b-4c09-98cc-2052997da12d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.023 350391 DEBUG oslo_concurrency.lockutils [req-8a5b1007-339b-486f-9eda-da7e45d4bb96 req-0e2376d0-0a1b-4c09-98cc-2052997da12d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.024 350391 DEBUG oslo_concurrency.lockutils [req-8a5b1007-339b-486f-9eda-da7e45d4bb96 req-0e2376d0-0a1b-4c09-98cc-2052997da12d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.024 350391 DEBUG nova.compute.manager [req-8a5b1007-339b-486f-9eda-da7e45d4bb96 req-0e2376d0-0a1b-4c09-98cc-2052997da12d 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Processing event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
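The Acquiring/acquired/released triple around _pop_event is oslo.concurrency's standard logging for named locks; nova serializes all access to an instance's external-event queue under '<uuid>-events'. The same pattern in isolation (lock name copied from the log, body illustrative):

    from oslo_concurrency import lockutils

    # Serialize event-queue access per instance, as logged above.
    with lockutils.lock('8f12f2a2-6379-4fcb-b93e-eac05f10f599-events'):
        pass  # pop or register the network-vif-plugged waiter here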
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.301 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123254.3008757, 8f12f2a2-6379-4fcb-b93e-eac05f10f599 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.302 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] VM Started (Lifecycle Event)#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.305 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.311 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.318 350391 INFO nova.virt.libvirt.driver [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Instance spawned successfully.#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.319 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.323 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.329 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.343 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.344 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.346 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.346 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.348 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.350 350391 DEBUG nova.virt.libvirt.driver [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.354 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.355 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123254.3010306, 8f12f2a2-6379-4fcb-b93e-eac05f10f599 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.355 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.385 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.391 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123254.3106313, 8f12f2a2-6379-4fcb-b93e-eac05f10f599 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.392 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.413 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.420 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.423 350391 INFO nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Took 9.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.424 350391 DEBUG nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:14 compute-0 podman[450371]: 2025-11-26 02:14:14.452890436 +0000 UTC m=+0.126215528 container create 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.453 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
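Both "Skip" messages follow one rule: while the instance has a pending task ('spawning'), lifecycle-driven power-state sync is deferred so it cannot race the ongoing build. A hypothetical condensation of that decision (not nova's actual code):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # A pending task (e.g. 'spawning') owns the instance: skip the sync.
        if task_state is not None:
            return False
        # Otherwise sync only on disagreement (DB power_state 0 vs VM 1 above).
        return db_power_state != vm_power_state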
Nov 26 02:14:14 compute-0 podman[450371]: 2025-11-26 02:14:14.380568139 +0000 UTC m=+0.053893201 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.500 350391 INFO nova.compute.manager [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Took 10.27 seconds to build instance.#033[00m
Nov 26 02:14:14 compute-0 nova_compute[350387]: 2025-11-26 02:14:14.525 350391 DEBUG oslo_concurrency.lockutils [None req-8c9a3f61-c101-427c-b2dd-ed8c4888c52e 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:14 compute-0 systemd[1]: Started libpod-conmon-6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c.scope.
Nov 26 02:14:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b2f0458d6526fdc8cf0d39ec37dc07c9743fa4b093112d93a107fd2fb68671/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:14 compute-0 podman[450371]: 2025-11-26 02:14:14.610601914 +0000 UTC m=+0.283927056 container init 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 26 02:14:14 compute-0 podman[450371]: 2025-11-26 02:14:14.655335618 +0000 UTC m=+0.328660710 container start 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:14:14 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [NOTICE]   (450391) : New worker (450393) forked
Nov 26 02:14:14 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [NOTICE]   (450391) : Loading success.
Nov 26 02:14:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 282 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 26 02:14:15 compute-0 podman[450402]: 2025-11-26 02:14:15.584265004 +0000 UTC m=+0.124599512 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, architecture=x86_64, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Nov 26 02:14:15 compute-0 podman[450403]: 2025-11-26 02:14:15.588928745 +0000 UTC m=+0.124882281 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
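The health_status records embed each container's config_data as a Python-literal dict in the container labels. Once the label text is isolated it parses with ast.literal_eval rather than json (note the single quotes and True/False); a sketch with an abbreviated label:

    import ast

    label = ("{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
             "'ports': ['9100:9100'], 'net': 'host', 'privileged': True}")
    config = ast.literal_eval(label)  # dict literal, not JSON
    print(config['image'], config['ports'])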
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.137 350391 DEBUG nova.compute.manager [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.138 350391 DEBUG oslo_concurrency.lockutils [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.138 350391 DEBUG oslo_concurrency.lockutils [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.139 350391 DEBUG oslo_concurrency.lockutils [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.139 350391 DEBUG nova.compute.manager [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] No waiting events found dispatching network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.140 350391 WARNING nova.compute.manager [req-103eb363-86df-418a-a56e-9bea17d72c6d req-1af0ed2b-2bf0-4961-94d0-1469fa76ce09 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received unexpected event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 for instance with vm_state active and task_state None.#033[00m
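The "No waiting events found" / "Received unexpected event" pair shows the dispatch rule: a neutron event either completes a waiter registered by the build path, or, when nothing is waiting (the instance is already active with no task), is warned about and dropped. A toy version of that dispatch (all names illustrative):

    def dispatch_external_event(waiters, event_name):
        # waiters: dict of event name -> future registered by the build path.
        fut = waiters.pop(event_name, None)
        if fut is None:
            print('Received unexpected event %s' % event_name)  # warn, drop
            return False
        fut.set_result(event_name)  # wakes wait_for_instance_event()
        return True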
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.423 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.500 350391 DEBUG nova.compute.manager [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-changed-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.501 350391 DEBUG nova.compute.manager [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Refreshing instance network info cache due to event network-changed-20b2d898-f324-4aae-ae7e-59312c845d00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.501 350391 DEBUG oslo_concurrency.lockutils [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.502 350391 DEBUG oslo_concurrency.lockutils [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:14:16 compute-0 nova_compute[350387]: 2025-11-26 02:14:16.502 350391 DEBUG nova.network.neutron [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Refreshing network info cache for port 20b2d898-f324-4aae-ae7e-59312c845d00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:14:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 283 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 215 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 26 02:14:18 compute-0 nova_compute[350387]: 2025-11-26 02:14:18.186 350391 DEBUG nova.network.neutron [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updated VIF entry in instance network info cache for port 20b2d898-f324-4aae-ae7e-59312c845d00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:14:18 compute-0 nova_compute[350387]: 2025-11-26 02:14:18.187 350391 DEBUG nova.network.neutron [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updating instance_info_cache with network_info: [{"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:14:18 compute-0 nova_compute[350387]: 2025-11-26 02:14:18.209 350391 DEBUG oslo_concurrency.lockutils [req-b790c39e-c1a8-487d-a046-797207957998 req-3b2e8ea8-2159-42c2-b257-985f00f652db 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-8f12f2a2-6379-4fcb-b93e-eac05f10f599" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
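The refreshed network_info above is a list of VIF dicts; each network carries subnets, each subnet its fixed IPs, and each fixed IP any floating IPs NATed to it. A small walker over exactly that shape:

    def iter_addresses(network_info):
        # Yield (fixed_ip, [floating_ips]) pairs from the structure logged above.
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])

    # For port 20b2d898-f324-4aae-ae7e-59312c845d00 this yields
    # ('10.100.0.6', ['192.168.122.242']).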
Nov 26 02:14:18 compute-0 nova_compute[350387]: 2025-11-26 02:14:18.539 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 283 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 992 KiB/s wr, 30 op/s
Nov 26 02:14:19 compute-0 nova_compute[350387]: 2025-11-26 02:14:19.998 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:20 compute-0 nova_compute[350387]: 2025-11-26 02:14:19.998 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:20 compute-0 nova_compute[350387]: 2025-11-26 02:14:19.998 350391 INFO nova.compute.manager [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Rebooting instance#033[00m
Nov 26 02:14:20 compute-0 nova_compute[350387]: 2025-11-26 02:14:20.017 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:14:20 compute-0 nova_compute[350387]: 2025-11-26 02:14:20.017 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquired lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:14:20 compute-0 nova_compute[350387]: 2025-11-26 02:14:20.018 350391 DEBUG nova.network.neutron [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:14:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:21 compute-0 nova_compute[350387]: 2025-11-26 02:14:21.426 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 283 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 993 KiB/s wr, 74 op/s
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.010 350391 DEBUG nova.network.neutron [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.033 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Releasing lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.035 350391 DEBUG nova.compute.manager [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:23 compute-0 kernel: tapd4404ee6-72 (unregistering): left promiscuous mode
Nov 26 02:14:23 compute-0 NetworkManager[48886]: <info>  [1764123263.3373] device (tapd4404ee6-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.337 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 ovn_controller[89102]: 2025-11-26T02:14:23Z|00147|binding|INFO|Releasing lport d4404ee6-7244-483c-99ba-127555e6ee3b from this chassis (sb_readonly=0)
Nov 26 02:14:23 compute-0 ovn_controller[89102]: 2025-11-26T02:14:23Z|00148|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b down in Southbound
Nov 26 02:14:23 compute-0 ovn_controller[89102]: 2025-11-26T02:14:23Z|00149|binding|INFO|Removing iface tapd4404ee6-72 ovn-installed in OVS
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.347 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.351 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:03:6c 10.100.0.11'], port_security=['fa:16:3e:68:03:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2c25548-a42e-4a7d-850c-bdecd264a753', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0ff318c290040838d6133cda861268a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '392666de-076f-4a6b-abfe-d6c4dadf08c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4500b2b3-5d5b-4a74-8ac2-4092583234ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=d4404ee6-7244-483c-99ba-127555e6ee3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.352 286844 INFO neutron.agent.ovn.metadata.agent [-] Port d4404ee6-7244-483c-99ba-127555e6ee3b in datapath e2c25548-a42e-4a7d-850c-bdecd264a753 unbound from our chassis#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.354 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2c25548-a42e-4a7d-850c-bdecd264a753, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.355 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4abd5521-4ad5-47ab-a1ed-363a38e468f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.356 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 namespace which is not needed anymore#033[00m
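The "Matched UPDATE: PortBindingUpdatedEvent(...)" record is ovsdbapp's row-event machinery: the agent subscribes to southbound Port_Binding updates and, as above, tears the metadata namespace down once no port on the datapath is bound to this chassis. A skeletal subscriber in the same style (the run() body is an assumption):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch updates on Port_Binding, as in the matched event above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked after matches() accepts the update; 'old' carries only
            # the changed columns (here: up and chassis).
            if not row.chassis and getattr(old, 'chassis', None):
                print('Port %s unbound from our chassis' % row.logical_port)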
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.372 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 26 02:14:23 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 43.972s CPU time.
Nov 26 02:14:23 compute-0 systemd-machined[138512]: Machine qemu-13-instance-0000000d terminated.
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.450 350391 INFO nova.virt.libvirt.driver [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance destroyed successfully.#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.451 350391 DEBUG nova.objects.instance [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'resources' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.466 350391 DEBUG nova.virt.libvirt.vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:13:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:14:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.466 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.467 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.467 350391 DEBUG os_vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.469 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.469 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4404ee6-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.470 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.472 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.478 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.479 350391 INFO os_vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72')#033[00m
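The Converting VIF / Converted object / Unplugging vif sequence is nova handing the neutron port to os-vif, which routes it to the 'ovs' plugin. Driving the same library directly looks roughly like this (field values copied from the log; a real caller also populates network and port_profile):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins once per process

    vif = vif_obj.VIFOpenVSwitch(
        id='d4404ee6-7244-483c-99ba-127555e6ee3b',
        address='fa:16:3e:68:03:6c',
        vif_name='tapd4404ee6-72',
        bridge_name='br-int',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9',
        name='tempest-ServerActionsTestJSON-server-824419160')

    os_vif.unplug(vif, inst)  # removes the port from br-int, as logged above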
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.490 350391 DEBUG nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start _get_guest_xml network_info=[{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': '4728a8a0-1107-4816-98c6-74482d53f92c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.496 350391 WARNING nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.504 350391 DEBUG nova.virt.libvirt.host [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.505 350391 DEBUG nova.virt.libvirt.host [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.509 350391 DEBUG nova.virt.libvirt.host [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.509 350391 DEBUG nova.virt.libvirt.host [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
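
The two probes above show why nothing CPU-related goes through cgroups v1 on this host: it runs the unified (v2) hierarchy, where the root advertises its available controllers in cgroup.controllers. A sketch of the underlying mechanism (not nova's code, just the check it boils down to):

    from pathlib import Path

    def cgroupv2_has_cpu(root="/sys/fs/cgroup"):
        # On a unified (cgroup v2) hierarchy, the hierarchy root lists its
        # available controllers, space-separated, in cgroup.controllers.
        controllers = Path(root, "cgroup.controllers")
        if not controllers.is_file():
            return False          # v1-only host: the file does not exist
        return "cpu" in controllers.read_text().split()

    print(cgroupv2_has_cpu())
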
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.510 350391 DEBUG nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.510 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4728a8a0-1107-4816-98c6-74482d53f92c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.511 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.511 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.511 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.512 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.512 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.513 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.513 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.513 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.514 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.514 350391 DEBUG nova.virt.hardware [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
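
For a 1-vCPU flavor with no topology constraints, only one factorisation of sockets x cores x threads exists, which is why exactly one topology is built and chosen above. A toy enumeration of the same search space, as an illustration rather than nova's implementation:

    from itertools import product

    def topologies(vcpus, limit=65536):
        # Every (sockets, cores, threads) whose product equals the vCPU
        # count, capped by the logged limits of 65536 per dimension.
        hi = min(vcpus, limit)
        return [(s, c, t)
                for s, c, t in product(range(1, hi + 1), repeat=3)
                if s * c * t == vcpus]

    print(topologies(1))   # [(1, 1, 1)]
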
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.515 350391 DEBUG nova.objects.instance [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'vcpu_model' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.530 350391 DEBUG oslo_concurrency.processutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.552 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [NOTICE]   (448492) : haproxy version is 2.8.14-c23fe91
Nov 26 02:14:23 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [NOTICE]   (448492) : path to executable is /usr/sbin/haproxy
Nov 26 02:14:23 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [WARNING]  (448492) : Exiting Master process...
Nov 26 02:14:23 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [ALERT]    (448492) : Current worker (448494) exited with code 143 (Terminated)
Nov 26 02:14:23 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[448488]: [WARNING]  (448492) : All workers exited. Exiting... (0)
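
Exit code 143 above is the shell convention 128 + signal number: the haproxy worker was killed by SIGTERM (15) as part of the ovnmeta teardown, not a crash. The arithmetic, spelled out:

    import signal

    def explain(code):
        # POSIX shells report "killed by signal N" as exit status 128 + N.
        if code > 128:
            return f"killed by {signal.Signals(code - 128).name} ({code - 128})"
        return f"exited normally with status {code}"

    print(explain(143))   # killed by SIGTERM (15)
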
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.560 350391 DEBUG nova.compute.manager [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:23 compute-0 systemd[1]: libpod-3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709.scope: Deactivated successfully.
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.561 350391 DEBUG oslo_concurrency.lockutils [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.562 350391 DEBUG oslo_concurrency.lockutils [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.563 350391 DEBUG oslo_concurrency.lockutils [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.563 350391 DEBUG nova.compute.manager [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.564 350391 WARNING nova.compute.manager [req-b98ff2dd-d9a7-4997-ae0a-6b1a961d6e99 req-89110e16-130c-4f96-8d5b-a2a88379354c 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state active and task_state reboot_started_hard.#033[00m
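
The warning is benign here: incoming external events are matched against waiters keyed by "<event-name>-<tag>", and the hard-reboot path unplugged the VIF itself without registering a waiter, so the Neutron-originated event finds nothing to wake. A sketch of the key shape, reconstructed from the log rather than from nova's source:

    # Event key as it appears in the log: "<event name>-<port UUID>".
    name, tag = "network-vif-unplugged", "d4404ee6-7244-483c-99ba-127555e6ee3b"
    key = f"{name}-{tag}"
    waiters = {}   # nothing registered while task_state is reboot_started_hard
    print(waiters.get(key, f"no waiting events for {key}"))
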
Nov 26 02:14:23 compute-0 podman[450477]: 2025-11-26 02:14:23.566546984 +0000 UTC m=+0.067224375 container died 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:14:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 76 op/s
Nov 26 02:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709-userdata-shm.mount: Deactivated successfully.
Nov 26 02:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b2d00f9aeacdc74f00bcd969bdee161efce34c2906db39f269fcfb22bc5feb6-merged.mount: Deactivated successfully.
Nov 26 02:14:23 compute-0 podman[450477]: 2025-11-26 02:14:23.619967691 +0000 UTC m=+0.120645082 container cleanup 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 02:14:23 compute-0 systemd[1]: libpod-conmon-3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709.scope: Deactivated successfully.
Nov 26 02:14:23 compute-0 podman[450504]: 2025-11-26 02:14:23.710504707 +0000 UTC m=+0.059446066 container remove 3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.719 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c8a06b-c888-4b15-a2a1-bf2d744db405]: (4, ('Wed Nov 26 02:14:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 (3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709)\n3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709\nWed Nov 26 02:14:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 (3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709)\n3d38e342cacaa3497a8bdc145080046447970367413fdb99612bbacb8d918709\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.721 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ae2f9-7b6d-4a93-8406-a41b6efbae8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
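
The privsep reply above is just captured stdout from stopping and then deleting the haproxy side-car container. A plain-subprocess equivalent of that two-step teardown (a hypothetical helper; the agent routes this through its privsep daemon instead):

    import subprocess

    def remove_container(name):
        # Stop, then delete; podman prints the container ID for each step,
        # which matches the stdout echoed in the privsep reply above.
        lines = []
        for verb in ("stop", "rm"):
            res = subprocess.run(["podman", verb, name],
                                 capture_output=True, text=True, check=True)
            lines.append(res.stdout.strip())
        return "\n".join(lines)
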
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.723 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2c25548-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:23 compute-0 kernel: tape2c25548-a0: left promiscuous mode
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.728 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.733 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c572c36a-118b-4c4b-9667-dd1c8da6294a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 nova_compute[350387]: 2025-11-26 02:14:23.745 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.752 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[dded32ed-b5b3-41ca-bcd9-b374548e0a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.753 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[ac83f623-fc30-4a05-b9bc-f08a5bf7d2b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.770 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebd4099-bf7d-4680-ae74-76a7a8cb7696]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682095, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450537, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.773 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:14:23 compute-0 systemd[1]: run-netns-ovnmeta\x2de2c25548\x2da42e\x2d4a7d\x2d850c\x2dbdecd264a753.mount: Deactivated successfully.
Nov 26 02:14:23 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:23.773 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1c4cf6-b2a0-4be3-8ed2-3d2c5a573078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
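
With the side-car gone, the agent removes the ovnmeta-<network> namespace; the RTM_NEWLINK dump above is the last netlink read from inside it before removal, and systemd then reaps the bind mount. A sketch with pyroute2, the library neutron's privileged ip_lib wraps:

    from pyroute2 import netns   # same library behind neutron's ip_lib

    ns = "ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753"
    if ns in netns.listnetns():
        netns.remove(ns)   # unlinks /run/netns/<ns>, as logged above
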
Nov 26 02:14:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:14:24 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317506082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.097 350391 DEBUG oslo_concurrency.processutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
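
Before building the RBD disk elements of the guest XML, nova shells out (via oslo.concurrency processutils, as logged) to discover the monitor address list. The same call as a bare subprocess, useful for reproducing it by hand:

    import json
    import subprocess

    def mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Identical command line to the one logged by processutils above.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        # The JSON carries a "mons" list; each entry's address feeds the
        # <host name=... port=.../> elements of the libvirt disk XML.
        return [m.get("addr") for m in json.loads(out).get("mons", [])]
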
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.187 350391 DEBUG oslo_concurrency.processutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.378 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.383 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.386 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.388 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
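
Bumping neutron:ovn-metadata-sb-cfg to the new nb_cfg is how the agent acknowledges that it has processed southbound sequence 16. With ovsdbapp that is a single db_set; a minimal sketch, assuming sb_api is an already-connected ovsdbapp southbound API object:

    def ack_nb_cfg(sb_api, chassis_private, nb_cfg):
        # Mirrors the DbSetCommand in the log: record the acknowledged
        # sequence number in Chassis_Private.external_ids.
        sb_api.db_set(
            "Chassis_Private", chassis_private,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)
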
Nov 26 02:14:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:14:24 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710099824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.773 350391 DEBUG oslo_concurrency.processutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.776 350391 DEBUG nova.virt.libvirt.vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:13:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:14:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.777 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.778 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.781 350391 DEBUG nova.objects.instance [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'pci_devices' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.808 350391 DEBUG nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <uuid>bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</uuid>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <name>instance-0000000d</name>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:name>tempest-ServerActionsTestJSON-server-824419160</nova:name>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:14:23</nova:creationTime>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:user uuid="3b8a1343dbab4fa693b622013d763897">tempest-ServerActionsTestJSON-1777809074-project-member</nova:user>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:project uuid="e0ff318c290040838d6133cda861268a">tempest-ServerActionsTestJSON-1777809074</nova:project>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="4728a8a0-1107-4816-98c6-74482d53f92c"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <nova:port uuid="d4404ee6-7244-483c-99ba-127555e6ee3b">
Nov 26 02:14:24 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <system>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="serial">bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="uuid">bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </system>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <os>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </os>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <features>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </features>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk">
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </source>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_disk.config">
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </source>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:14:24 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:68:03:6c"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <target dev="tapd4404ee6-72"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/console.log" append="off"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <video>
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </video>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <input type="keyboard" bus="usb"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:14:24 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:14:24 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:14:24 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:14:24 compute-0 nova_compute[350387]: </domain>
Nov 26 02:14:24 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
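
The domain XML dumped above is the complete regenerated guest definition for the hard reboot, and it can be sanity-checked offline with the standard library. A sketch that pulls the disk targets and the tap device out of a trimmed copy of the <devices> section:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <devices> section from the XML dumped above.
    dom = ET.fromstring("""<domain type="kvm"><devices>
      <disk type="network" device="disk"><target dev="vda" bus="virtio"/></disk>
      <disk type="network" device="cdrom"><target dev="sda" bus="sata"/></disk>
      <interface type="ethernet"><target dev="tapd4404ee6-72"/></interface>
    </devices></domain>""")

    for disk in dom.findall("./devices/disk"):
        print(disk.get("device"), "->", disk.find("target").get("dev"))
    print("vif ->", dom.find("./devices/interface/target").get("dev"))
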
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.810 350391 DEBUG nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.810 350391 DEBUG nova.virt.libvirt.driver [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.812 350391 DEBUG nova.virt.libvirt.vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:13:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:14:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.812 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.813 350391 DEBUG nova.network.os_vif_util [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.814 350391 DEBUG os_vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.815 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.815 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.816 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.821 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.822 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4404ee6-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.823 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4404ee6-72, col_values=(('external_ids', {'iface-id': 'd4404ee6-7244-483c-99ba-127555e6ee3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:03:6c', 'vm-uuid': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
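
Plugging the VIF back is the mirror image of the earlier unplug: add the port to br-int (a no-op if it already exists) and stamp the Interface row with the external_ids that OVN matches the logical port on. The same two-command transaction expressed with ovsdbapp's public commands, assuming ovs_api is a connected Open vSwitch API object:

    def plug_tap(ovs_api, dev, port_id, mac, vm_uuid, bridge="br-int"):
        # One transaction, two commands, matching the txn logged above.
        with ovs_api.transaction(check_error=True) as txn:
            txn.add(ovs_api.add_port(bridge, dev, may_exist=True))
            txn.add(ovs_api.db_set(
                "Interface", dev,
                ("external_ids", {"iface-id": port_id,
                                  "iface-status": "active",
                                  "attached-mac": mac,
                                  "vm-uuid": vm_uuid})))
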
Nov 26 02:14:24 compute-0 NetworkManager[48886]: <info>  [1764123264.8297] manager: (tapd4404ee6-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.825 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.828 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.837 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.839 350391 INFO os_vif [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72')#033[00m
Nov 26 02:14:24 compute-0 kernel: tapd4404ee6-72: entered promiscuous mode
Nov 26 02:14:24 compute-0 ovn_controller[89102]: 2025-11-26T02:14:24Z|00150|binding|INFO|Claiming lport d4404ee6-7244-483c-99ba-127555e6ee3b for this chassis.
Nov 26 02:14:24 compute-0 ovn_controller[89102]: 2025-11-26T02:14:24Z|00151|binding|INFO|d4404ee6-7244-483c-99ba-127555e6ee3b: Claiming fa:16:3e:68:03:6c 10.100.0.11
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.952 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 NetworkManager[48886]: <info>  [1764123264.9565] manager: (tapd4404ee6-72): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 26 02:14:24 compute-0 systemd-udevd[450450]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.959 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.969 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:03:6c 10.100.0.11'], port_security=['fa:16:3e:68:03:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2c25548-a42e-4a7d-850c-bdecd264a753', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0ff318c290040838d6133cda861268a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '392666de-076f-4a6b-abfe-d6c4dadf08c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4500b2b3-5d5b-4a74-8ac2-4092583234ee, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=d4404ee6-7244-483c-99ba-127555e6ee3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.970 286844 INFO neutron.agent.ovn.metadata.agent [-] Port d4404ee6-7244-483c-99ba-127555e6ee3b in datapath e2c25548-a42e-4a7d-850c-bdecd264a753 bound to our chassis#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.971 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e2c25548-a42e-4a7d-850c-bdecd264a753#033[00m
Nov 26 02:14:24 compute-0 NetworkManager[48886]: <info>  [1764123264.9855] device (tapd4404ee6-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:14:24 compute-0 NetworkManager[48886]: <info>  [1764123264.9877] device (tapd4404ee6-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:14:24 compute-0 nova_compute[350387]: 2025-11-26 02:14:24.992 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.992 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6142cf-f508-401c-94f5-01dec093ecf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:24 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.993 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape2c25548-a1 in ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 02:14:24 compute-0 ovn_controller[89102]: 2025-11-26T02:14:24Z|00152|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b ovn-installed in OVS
Nov 26 02:14:24 compute-0 ovn_controller[89102]: 2025-11-26T02:14:24Z|00153|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b up in Southbound
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.998 413433 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape2c25548-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.999 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.999 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.000 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.001 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:24.999 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[29fb8308-7116-4c1b-9d77-ecb508221f9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.008 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b2ccb2-9adc-42a4-adbc-60beac01bcfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.024 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[39599ade-bebb-45d8-9cf6-362033005fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 systemd-machined[138512]: New machine qemu-15-instance-0000000d.
Nov 26 02:14:25 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.052 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5c08222b-74ed-4dce-89c6-befa963ef5e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.096 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[f577dc66-10e9-4355-8448-bbfaa645b7a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.107 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fde57157-1b23-4016-b613-38a777db365c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 NetworkManager[48886]: <info>  [1764123265.1100] manager: (tape2c25548-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Nov 26 02:14:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.147 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[a6120bc6-e448-4ef6-83bc-a72c637ecee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.151 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[7d075bb0-92c4-466d-915c-e8de7b48376a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 NetworkManager[48886]: <info>  [1764123265.1766] device (tape2c25548-a0): carrier: link connected
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.184 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[c983c38d-d74f-4f8b-a066-5893266aa156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.203 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ecc6bb-1213-4909-929d-c82304063e9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2c25548-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:d2:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689580, 'reachable_time': 38960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450626, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.219 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bd80f0da-2e3d-4009-9615-3a91eba425dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:d29c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689580, 'tstamp': 689580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450627, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.239 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[965a9c3f-2358-46cc-ba97-7ff78e81535a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape2c25548-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:d2:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689580, 'reachable_time': 38960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450628, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.281 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6119bb1d-de77-4503-b22e-3f417cfadba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.378 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f140d6f2-3ee1-4274-aa2c-f4f30ec4a3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.379 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2c25548-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.379 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.380 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2c25548-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.381 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 kernel: tape2c25548-a0: entered promiscuous mode
Nov 26 02:14:25 compute-0 NetworkManager[48886]: <info>  [1764123265.3894] manager: (tape2c25548-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.394 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.396 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape2c25548-a0, col_values=(('external_ids', {'iface-id': '3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:14:25 compute-0 ovn_controller[89102]: 2025-11-26T02:14:25Z|00154|binding|INFO|Releasing lport 3e4f4a4e-c5ed-4544-9ad9-aa5c0fc87ea7 from this chassis (sb_readonly=0)
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.398 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.427 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.428 286844 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.432 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.429 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f4549982-6e2d-4f7d-ac79-77a3be0ec59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.435 286844 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: global
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    log         /dev/log local0 debug
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    log-tag     haproxy-metadata-proxy-e2c25548-a42e-4a7d-850c-bdecd264a753
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    user        root
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    group       root
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    maxconn     1024
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    pidfile     /var/lib/neutron/external/pids/e2c25548-a42e-4a7d-850c-bdecd264a753.pid.haproxy
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    daemon
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: defaults
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    log global
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    mode http
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    option httplog
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    option dontlognull
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    option http-server-close
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    option forwardfor
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    retries                 3
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    timeout http-request    30s
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    timeout connect         30s
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    timeout client          32s
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    timeout server          32s
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    timeout http-keep-alive 30s
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: listen listener
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    bind 169.254.169.254:80
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]:    http-request add-header X-OVN-Network-ID e2c25548-a42e-4a7d-850c-bdecd264a753
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 02:14:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:14:25.436 286844 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'env', 'PROCESS_TAG=haproxy-e2c25548-a42e-4a7d-850c-bdecd264a753', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e2c25548-a42e-4a7d-850c-bdecd264a753.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 26 02:14:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 78 op/s
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.630 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.630 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.631 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.631 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.631 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.631 350391 WARNING nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.632 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.632 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.632 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.632 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.632 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.633 350391 WARNING nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.633 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.633 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.633 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.634 350391 DEBUG oslo_concurrency.lockutils [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.634 350391 DEBUG nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:14:25 compute-0 nova_compute[350387]: 2025-11-26 02:14:25.634 350391 WARNING nova.compute.manager [req-0ae2be6f-a256-4580-9705-5f2a5c3ca0ee req-0325c3c6-0740-40c2-801b-e41fd4a58b62 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 26 02:14:26 compute-0 podman[450659]: 2025-11-26 02:14:26.002532656 +0000 UTC m=+0.130541129 container create 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:14:26 compute-0 podman[450659]: 2025-11-26 02:14:25.95059009 +0000 UTC m=+0.078598583 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 02:14:26 compute-0 systemd[1]: Started libpod-conmon-950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821.scope.
Nov 26 02:14:26 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019674fa5dd37028790f70e232229272105cff713498eaf250e83bb4b0ae1d27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:26 compute-0 podman[450659]: 2025-11-26 02:14:26.166886641 +0000 UTC m=+0.294895154 container init 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:14:26 compute-0 podman[450659]: 2025-11-26 02:14:26.175104251 +0000 UTC m=+0.303112714 container start 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 02:14:26 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [NOTICE]   (450678) : New worker (450680) forked
Nov 26 02:14:26 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [NOTICE]   (450678) : Loading success.
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.659 350391 DEBUG nova.compute.manager [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.661 350391 DEBUG nova.virt.libvirt.host [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Removed pending event for bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.663 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123266.658858, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.664 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.672 350391 INFO nova.virt.libvirt.driver [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance rebooted successfully.#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.673 350391 DEBUG nova.compute.manager [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.694 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.700 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.738 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.738 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123266.660326, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.739 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Started (Lifecycle Event)#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.761 350391 DEBUG oslo_concurrency.lockutils [None req-29d1a078-aff8-4ea0-89ab-f82df3607c45 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.764 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:14:26 compute-0 nova_compute[350387]: 2025-11-26 02:14:26.772 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:14:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:14:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/652294861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:14:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:14:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/652294861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:14:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 72 op/s
Nov 26 02:14:28 compute-0 nova_compute[350387]: 2025-11-26 02:14:28.547 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 62 op/s
Nov 26 02:14:29 compute-0 podman[158021]: time="2025-11-26T02:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:14:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46279 "" "Go-http-client/1.1"
Nov 26 02:14:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9564 "" "Go-http-client/1.1"
Nov 26 02:14:29 compute-0 nova_compute[350387]: 2025-11-26 02:14:29.826 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:30 compute-0 podman[450732]: 2025-11-26 02:14:30.574872145 +0000 UTC m=+0.130623531 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 02:14:30 compute-0 podman[450733]: 2025-11-26 02:14:30.577248482 +0000 UTC m=+0.106961298 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 02:14:30 compute-0 podman[450740]: 2025-11-26 02:14:30.595111212 +0000 UTC m=+0.131704091 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:14:31 compute-0 openstack_network_exporter[367323]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:14:31 compute-0 openstack_network_exporter[367323]: ERROR   02:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:14:31 compute-0 openstack_network_exporter[367323]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:14:31 compute-0 openstack_network_exporter[367323]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:14:31 compute-0 openstack_network_exporter[367323]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:14:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 12 KiB/s wr, 98 op/s
Nov 26 02:14:33 compute-0 nova_compute[350387]: 2025-11-26 02:14:33.547 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 11 KiB/s wr, 86 op/s
Nov 26 02:14:34 compute-0 nova_compute[350387]: 2025-11-26 02:14:34.831 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:14:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.305 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.306 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.356 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.356 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.357 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.357 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.358 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:14:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:14:36 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637605655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:14:36 compute-0 nova_compute[350387]: 2025-11-26 02:14:36.984 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.130 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.131 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.137 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.138 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.143 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.144 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:14:37 compute-0 podman[450812]: 2025-11-26 02:14:37.156184012 +0000 UTC m=+0.093053608 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 26 02:14:37 compute-0 podman[450814]: 2025-11-26 02:14:37.192487459 +0000 UTC m=+0.140167718 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
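(insert after line 73054)

Both entries above come from podman's periodic healthcheck runs; the same state can be read back on demand. A sketch, hedged because the inspect template path moved between podman releases (newer builds expose .State.Health, older ones .State.Healthcheck):

    import subprocess

    def health(container):
        # Try the newer template first, then fall back to the older one.
        for tmpl in ("{{.State.Health.Status}}", "{{.State.Healthcheck.Status}}"):
            res = subprocess.run(
                ["podman", "inspect", "--format", tmpl, container],
                capture_output=True, text=True,
            )
            if res.returncode == 0 and res.stdout.strip():
                return res.stdout.strip()
        return "unknown"

    print(health("ceilometer_agent_ipmi"))  # e.g. "healthy", matching the log
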
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.558 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.559 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3425MB free_disk=59.87631607055664GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
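
The pci_devices list above is all emulated hardware: vendor 1af4 is Red Hat/virtio, 8086 is Intel. A short sketch that tallies such a list by vendor:product pair; the JSON literal is a truncated paste of the log line, kept to the fields the code touches:

    import json
    from collections import Counter

    pci_json = """[
      {"address": "0000:00:05.0", "vendor_id": "1af4", "product_id": "1002"},
      {"address": "0000:00:00.0", "vendor_id": "8086", "product_id": "1237"},
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"}
    ]"""
    counts = Counter(
        f'{d["vendor_id"]}:{d["product_id"]}' for d in json.loads(pci_json)
    )
    for pair, n in counts.most_common():
        print(pair, n)  # e.g. "1af4:1000 2" for the two virtio-net NICs
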
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.560 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.560 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:14:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.655 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.655 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.655 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 8f12f2a2-6379-4fcb-b93e-eac05f10f599 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.656 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.656 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:14:37 compute-0 nova_compute[350387]: 2025-11-26 02:14:37.745 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:14:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:14:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305954918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.227 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.235 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.257 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
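
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. Reproducing the arithmetic for the values logged above:

    # Inventory copied from the log line above, trimmed to the fields used.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
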
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.295 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.296 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
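
The acquire/release pairs bracketing the update come from oslo.concurrency's named-lock helper. A minimal sketch of the pattern (the decorated function here is a placeholder, not nova's actual code; fair=True keeps waiters in FIFO order):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources", fair=True)
    def update_available_resource():
        # Runs with the "compute_resources" lock held, producing
        # acquired/released debug pairs like the ones above.
        pass
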
Nov 26 02:14:38 compute-0 nova_compute[350387]: 2025-11-26 02:14:38.550 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 67 op/s
Nov 26 02:14:39 compute-0 nova_compute[350387]: 2025-11-26 02:14:39.835 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:40 compute-0 nova_compute[350387]: 2025-11-26 02:14:40.291 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:40 compute-0 nova_compute[350387]: 2025-11-26 02:14:40.291 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:40 compute-0 nova_compute[350387]: 2025-11-26 02:14:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:40 compute-0 podman[450876]: 2025-11-26 02:14:40.555579487 +0000 UTC m=+0.094402176 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:14:40 compute-0 podman[450875]: 2025-11-26 02:14:40.567317026 +0000 UTC m=+0.109352255 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:14:41
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'backups']
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
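(insert after line 73092)

"prepared 0/10 changes" means the upmap balancer found nothing to move in this round. The same state is queryable from the CLI; a sketch, with the JSON keys hedged since they vary a little across Ceph releases:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    # Typical keys include "active", "mode", and "optimize_result".
    print(status.get("mode"), status.get("active"))
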
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 67 op/s
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:14:41 compute-0 ceph-mgr[193049]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2845592742
Nov 26 02:14:42 compute-0 nova_compute[350387]: 2025-11-26 02:14:42.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:42 compute-0 nova_compute[350387]: 2025-11-26 02:14:42.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:14:42 compute-0 nova_compute[350387]: 2025-11-26 02:14:42.497 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:14:42 compute-0 nova_compute[350387]: 2025-11-26 02:14:42.498 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:14:42 compute-0 nova_compute[350387]: 2025-11-26 02:14:42.499 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.874 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.874 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
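(insert after line 73111)

With one worker thread and a longer pollster list, tasks queue rather than run in parallel, which is what the two manager lines above describe. A toy reproduction (pollster names borrowed from the log; the poll body is a stand-in):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.ephemeral.size", "network.incoming.packets",
                 "disk.root.size", "network.incoming.packets.drop"]

    def poll(name):
        # Placeholder for the real pollster work.
        return f"polled {name}"

    # max_workers=1 matches the "[1] threads" logged above: the four
    # tasks run strictly one after another.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, pollsters):
            print(result)
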
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.876 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.884 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.889 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:14:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:42.890 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.416 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Wed, 26 Nov 2025 02:14:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9cf2d6e4-b7fd-47ca-a912-71ccfbecf3fc x-openstack-request-id: req-9cf2d6e4-b7fd-47ca-a912-71ccfbecf3fc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.417 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9", "name": "tempest-ServerActionsTestJSON-server-824419160", "status": "ACTIVE", "tenant_id": "e0ff318c290040838d6133cda861268a", "user_id": "3b8a1343dbab4fa693b622013d763897", "metadata": {}, "hostId": "3619c373b66037902c5e595910d03f2994e8a69722d604e109161bbe", "image": {"id": "4728a8a0-1107-4816-98c6-74482d53f92c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/4728a8a0-1107-4816-98c6-74482d53f92c"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:12:58Z", "updated": "2025-11-26T02:14:26Z", "addresses": {"tempest-ServerActionsTestJSON-456600665-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:03:6c"}, {"version": 4, "addr": "192.168.122.188", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:03:6c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-396503000", "OS-SRV-USG:launched_at": "2025-11-26T02:13:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--297916561"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.417 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 used request id req-9cf2d6e4-b7fd-47ca-a912-71ccfbecf3fc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
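(insert after line 73144)

The REQ line above is novaclient's curl-style trace of a plain GET against the compute API. The same request through keystoneauth1, which handles the token that appears hashed in the trace (the auth URL and credentials below are assumptions, not values from this log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
        username="ceilometer", password="...",                        # assumed
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9",
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    print(resp.json()["server"]["status"])  # "ACTIVE" per the RESP BODY above
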
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.418 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9', 'name': 'tempest-ServerActionsTestJSON-server-824419160', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4728a8a0-1107-4816-98c6-74482d53f92c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0ff318c290040838d6133cda861268a', 'user_id': '3b8a1343dbab4fa693b622013d763897', 'hostId': '3619c373b66037902c5e595910d03f2994e8a69722d604e109161bbe', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.421 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8f12f2a2-6379-4fcb-b93e-eac05f10f599 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.423 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8f12f2a2-6379-4fcb-b93e-eac05f10f599 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:14:43 compute-0 nova_compute[350387]: 2025-11-26 02:14:43.553 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 936 KiB/s rd, 31 op/s
Nov 26 02:14:43 compute-0 nova_compute[350387]: 2025-11-26 02:14:43.888 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [{"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
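
The network_info blob above nests fixed IPs inside subnets and floating IPs inside each fixed IP. A small walker over a trimmed copy of that structure:

    # Trimmed paste of the network_info logged above, kept to the fields
    # the loop reads.
    network_info = [{
        "devname": "tapd4404ee6-72",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{
                "address": "10.100.0.11",
                "floating_ips": [{"address": "192.168.122.188"}],
            }],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"], "->", floats)
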
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.894 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2081 Content-Type: application/json Date: Wed, 26 Nov 2025 02:14:43 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-234f79cd-8cd2-4878-8816-614dbaca367c x-openstack-request-id: req-234f79cd-8cd2-4878-8816-614dbaca367c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.894 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8f12f2a2-6379-4fcb-b93e-eac05f10f599", "name": "tempest-TestServerBasicOps-server-1676766604", "status": "ACTIVE", "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "user_id": "236e06cd46874605a18288ba033ee875", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "cd7c0af32950a9ec775df1ba3f570462c38ca62004909d7b6ed6dd8b", "image": {"id": "4728a8a0-1107-4816-98c6-74482d53f92c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/4728a8a0-1107-4816-98c6-74482d53f92c"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:14:03Z", "updated": "2025-11-26T02:14:14Z", "addresses": {"tempest-TestServerBasicOps-996320676-network": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:0d:fa"}, {"version": 4, "addr": "192.168.122.242", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:04:0d:fa"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8f12f2a2-6379-4fcb-b93e-eac05f10f599"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8f12f2a2-6379-4fcb-b93e-eac05f10f599"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-638417550", "OS-SRV-USG:launched_at": "2025-11-26T02:14:14.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-43790267"}, {"name": "tempest-securitygroup--1135926364"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.894 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8f12f2a2-6379-4fcb-b93e-eac05f10f599 used request id req-234f79cd-8cd2-4878-8816-614dbaca367c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.896 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f12f2a2-6379-4fcb-b93e-eac05f10f599', 'name': 'tempest-TestServerBasicOps-server-1676766604', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '4728a8a0-1107-4816-98c6-74482d53f92c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8fc101eeda814bb98f1a44c789c8958f', 'user_id': '236e06cd46874605a18288ba033ee875', 'hostId': 'cd7c0af32950a9ec775df1ba3f570462c38ca62004909d7b6ed6dd8b', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.896 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.896 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.897 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.897 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:14:43.897176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.899 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.900 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.900 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.900 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.900 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:14:43.900535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.906 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.911 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 / tapd4404ee6-72 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.911 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 nova_compute[350387]: 2025-11-26 02:14:43.914 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:14:43 compute-0 nova_compute[350387]: 2025-11-26 02:14:43.915 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.916 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8f12f2a2-6379-4fcb-b93e-eac05f10f599 / tap20b2d898-f3 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.916 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.917 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
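(insert after line 73174)

The "No delta meter predecessor" entries above are the first-sample case: a delta needs a previous cumulative reading to subtract, and on first sight of a tap there is nothing to diff against, so the raw reading passes through (the volume: 1 samples right after). A minimal sketch of the bookkeeping implied by those lines:

    _prev = {}

    def delta(key, value):
        # First sighting has no predecessor: pass the raw reading through,
        # as the inspector does for the new taps above.
        before = _prev.get(key)
        _prev[key] = value
        return value if before is None else value - before

    print(delta("tapd4404ee6-72/rx_packets", 1))   # 1: no predecessor yet
    print(delta("tapd4404ee6-72/rx_packets", 10))  # 9
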
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.917 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.917 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.918 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.918 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.918 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:14:43.918348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.920 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.920 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.920 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.920 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.920 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.921 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:14:43.921117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.922 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.922 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.922 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.923 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.923 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.924 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.924 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.924 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.924 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:14:43.924427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.925 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.925 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.926 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.926 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.927 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.927 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.927 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.927 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.928 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:14:43.927740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.928 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.929 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.929 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.930 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.930 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.930 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.930 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.930 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.931 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:14:43.931119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.965 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 129140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:43.998 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/cpu volume: 16570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.021 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/cpu volume: 28200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.022 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
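The cpu volumes above are cumulative guest CPU time in nanoseconds, so 129140000000 amounts to roughly 129.14 s of CPU time consumed by instance 74d081af-66cd-4e37-99e4-31f777885766 since it started. Turning that into a utilisation figure needs two successive samples; a hedged sketch, assuming a hypothetical follow-up reading, a 300 s polling interval and one vCPU:

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(t0_ns, t1_ns, interval_s, vcpus):
        """Percent of available vCPU time used between two cumulative samples."""
        return 100.0 * (t1_ns - t0_ns) / NS_PER_S / (interval_s * vcpus)

    print(129_140_000_000 / NS_PER_S)  # 129.14 seconds of cumulative CPU time
    # The second reading below is invented for illustration only.
    print(cpu_util_percent(129_140_000_000, 129_440_000_000,
                           interval_s=300, vcpus=1))  # 0.1 (percent)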
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.022 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.022 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.023 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:14:44.023107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.024 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.024 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.025 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.025 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.025 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.026 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.026 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.026 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.027 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:14:44.026813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.027 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 43.5234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.028 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.028 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9: ceilometer.compute.pollsters.NoVolumeException
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.028 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.028 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 8f12f2a2-6379-4fcb-b93e-eac05f10f599: ceilometer.compute.pollsters.NoVolumeException
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.029 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
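The memory.usage WARNINGs above show the miss path of the sample conversion: memory.usage depends on guest memory statistics (such as those exposed through the virtio balloon driver), and when libvirt returns none the inspector reports the value as Unavailable, the pollster raises NoVolumeException, and the agent logs one WARNING per affected instance instead of aborting the polling cycle. A minimal sketch of that flow, with hypothetical helper names rather than ceilometer's actual code:

    class NoVolumeException(Exception):
        pass

    def stats_to_sample(instance_uuid, meter, value):
        print(f"{instance_uuid}/{meter} volume: "
              f"{'Unavailable' if value is None else value}")
        if value is None:
            raise NoVolumeException()
        return (instance_uuid, meter, value)

    readings = [("74d081af-66cd-4e37-99e4-31f777885766", 43.5234375),
                ("bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9", None)]
    for uuid, mem in readings:
        try:
            stats_to_sample(uuid, "memory.usage", mem)
        except NoVolumeException:
            print(f"WARNING: memory.usage statistic is not available "
                  f"for instance {uuid}")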
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.029 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.029 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.029 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.030 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.030 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.030 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T02:14:44.030175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.030 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-824419160>, <NovaLikeServer: tempest-TestServerBasicOps-server-1676766604>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-824419160>, <NovaLikeServer: tempest-TestServerBasicOps-server-1676766604>]
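The ERROR above is deliberate blacklisting rather than a crash: the preceding DEBUG line records that LibvirtInspector supplies no data for this rate meter, the pollster raises PollsterPermanentError for the discovered servers, and the manager then stops offering those resources to that pollster on that source instead of retrying every cycle. An illustrative sketch of the pattern, with simplified, hypothetical names rather than ceilometer's actual manager code:

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = {}  # (source, pollster) -> resources never offered again

    def run_pollster(source, pollster, get_samples, resources):
        skip = blacklist.setdefault((source, pollster), set())
        todo = [r for r in resources if r not in skip]
        if not todo:
            return []  # everything already blacklisted: skip silently
        try:
            return get_samples(todo)
        except PollsterPermanentError as err:
            print(f"Prevent pollster {pollster} from polling {err.resources} "
                  f"on source {source} anymore!")
            skip.update(err.resources)
            return []

    def rate_samples(resources):
        # Stand-in for a pollster whose inspector provides no rate data.
        raise PollsterPermanentError(resources)

    servers = ["tempest-server-1", "tempest-server-2"]
    run_pollster("pollsters", "network.outgoing.bytes.rate", rate_samples, servers)
    run_pollster("pollsters", "network.outgoing.bytes.rate", rate_samples, servers)

The second call returns an empty list without logging: both servers are already on the blacklist for that (source, pollster) pair, which is why the ERROR appears once per pollster rather than on every cycle.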
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.031 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.031 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.031 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.031 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.031 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.032 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:14:44.031520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.032 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.032 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.033 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.034 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:14:44.033536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.034 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.034 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.035 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.036 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:14:44.035591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.036 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.036 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.037 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.038 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:14:44.037628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.038 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.038 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.039 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.040 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:14:44.039643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.040 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.040 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.041 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:14:44.041742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.054 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.055 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.067 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.067 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.082 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.082 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
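Each instance reports two disk.device.capacity volumes above because the per-device pollster emits one sample per attached disk: 1073741824 bytes is exactly 1 GiB (plausibly the root disk), while 509952 bytes is a second, much smaller device; the log lines do not name the devices. For sifting these "uuid/meter volume: N" readings out of journal output, a small self-contained helper (an assumption of this write-up, not a ceilometer tool):

    import re
    from collections import defaultdict

    # Numeric readings only; "Unavailable" values are skipped on purpose.
    PATTERN = re.compile(
        r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
        r"/(?P<meter>[\w.]+) volume: (?P<vol>[\d.]+)")

    def collect(lines):
        samples = defaultdict(list)  # (uuid, meter) -> list of volumes
        for line in lines:
            m = PATTERN.search(line)
            if m:
                samples[(m["uuid"], m["meter"])].append(float(m["vol"]))
        return samples

    demo = ("... DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-"
            "99e4-31f777885766/disk.device.capacity volume: 1073741824 ...")
    print(collect([demo]))  # one capacity sample recorded for that instance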
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.083 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:14:44.083515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.124 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.125 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.157 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.158 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.207 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.208 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.209 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T02:14:44.209361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-824419160>, <NovaLikeServer: tempest-TestServerBasicOps-server-1676766604>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-824419160>, <NovaLikeServer: tempest-TestServerBasicOps-server-1676766604>]
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.210 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.211 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:14:44.210869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.211 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2333207221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.211 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 852741029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.212 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.latency volume: 1507553812 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.212 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.latency volume: 1958204 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.212 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.latency volume: 1808875985 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.latency volume: 3689853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.214 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:14:44.214068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.214 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.215 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.215 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.215 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.217 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.217 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:14:44.217046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.218 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.218 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.218 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.218 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.220 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:14:44.220027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.220 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.221 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.221 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.221 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.221 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.222 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.222 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.222 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.223 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.223 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:14:44.222957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.223 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.225 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.225 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 8514171650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:14:44.224980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.225 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.226 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.226 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.226 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.226 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.227 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.227 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.227 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.228 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.228 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:14:44.227886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.228 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.228 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.229 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.229 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.229 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.230 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.231 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:14:44.230756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.231 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.231 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.232 15 DEBUG ceilometer.compute.pollsters [-] bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.232 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.232 15 DEBUG ceilometer.compute.pollsters [-] 8f12f2a2-6379-4fcb-b93e-eac05f10f599/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.235 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.236 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:14:44 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:14:44.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
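Each pollster above is traced through the same lifecycle: discovery runs, the agent checks whether the pollster belongs to a coordination group (here always [None], so no hash-ring partitioning applies and this agent polls every local instance itself), a heartbeat is recorded, one sample is emitted per instance and device, and the task is marked finished. The following minimal Python sketch mirrors the flow the DEBUG lines trace; all names (run_pollster, record_heartbeat, discover, get_samples) are hypothetical illustrations of the logged sequence, not ceilometer's actual API.

    # Illustrative sketch of the per-pollster cycle traced by the DEBUG lines
    # above; every name here is hypothetical, this is not ceilometer's code.
    from datetime import datetime, timezone

    def record_heartbeat(name):
        # "Pollster heartbeat update: <name>" / "Updated heartbeat for <name> (...)"
        print(f"heartbeat {name} {datetime.now(timezone.utc).isoformat()}")

    def run_pollster(name, coordination_group, discover, get_samples):
        resources = discover()                    # "Executing discovery process ..."
        if coordination_group is None:            # "Checking if we need coordination ..."
            pass                                  # no hash ring: poll everything locally
        record_heartbeat(name)
        for resource in resources:
            for volume in get_samples(resource):  # one sample per instance/device
                print(f"{resource}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")

    # Usage with stub data shaped like the log above:
    run_pollster(
        "disk.device.write.requests", None,
        discover=lambda: ["74d081af-66cd-4e37-99e4-31f777885766"],
        get_samples=lambda r: [310, 0],
    )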
Nov 26 02:14:44 compute-0 nova_compute[350387]: 2025-11-26 02:14:44.838 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:14:46 compute-0 ovn_controller[89102]: 2025-11-26T02:14:46Z|00155|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 26 02:14:46 compute-0 podman[450912]: 2025-11-26 02:14:46.561421181 +0000 UTC m=+0.118916393 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9)
Nov 26 02:14:46 compute-0 podman[450913]: 2025-11-26 02:14:46.575003372 +0000 UTC m=+0.133243315 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:14:47 compute-0 nova_compute[350387]: 2025-11-26 02:14:47.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:47 compute-0 nova_compute[350387]: 2025-11-26 02:14:47.301 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:14:48 compute-0 nova_compute[350387]: 2025-11-26 02:14:48.555 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:49 compute-0 ovn_controller[89102]: 2025-11-26T02:14:49Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:0d:fa 10.100.0.6
Nov 26 02:14:49 compute-0 ovn_controller[89102]: 2025-11-26T02:14:49Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:0d:fa 10.100.0.6
Nov 26 02:14:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 283 MiB data, 410 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:14:49 compute-0 nova_compute[350387]: 2025-11-26 02:14:49.842 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:51 compute-0 nova_compute[350387]: 2025-11-26 02:14:51.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:14:51 compute-0 nova_compute[350387]: 2025-11-26 02:14:51.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001866450865240607 of space, bias 1.0, pg target 0.559935259572182 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
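The per-pool lines above expose the autoscaler's arithmetic: each logged "pg target" equals usage_ratio × bias × a total PG budget of 300 (plausibly 100 PGs per OSD across this cluster's 3 OSDs), then quantized to a power of two with a per-pool floor. A minimal sketch of that computation follows; the budget constants and the pg_num_min floors are assumptions inferred from the logged values, not settings read from this cluster.

    # Sketch of the pg_autoscaler arithmetic shown above. pg_per_osd=100,
    # n_osds=3, and the pg_num_min floors are inferred assumptions.
    import math

    def pg_target(usage_ratio, bias, pg_num_min, pg_per_osd=100, n_osds=3):
        raw = usage_ratio * bias * pg_per_osd * n_osds    # "pg target <raw>"
        # round up to a power of two, never below the pool's minimum
        pow2 = 2 ** math.ceil(math.log2(raw)) if raw >= 1 else 1
        return raw, max(pg_num_min, pow2)                 # "quantized to <n>"

    # Reproduces the 'vms' line: raw ~0.5599, quantized to 32
    print(pg_target(0.001866450865240607, bias=1.0, pg_num_min=32))
    # Reproduces the 'cephfs.cephfs.meta' line: bias 4.0, quantized to 16
    print(pg_target(5.087256625643029e-07, bias=4.0, pg_num_min=16))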
Nov 26 02:14:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 295 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 1.5 MiB/s wr, 34 op/s
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:14:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 73d82d52-9cfd-4fd2-887f-42951d791c04 does not exist
Nov 26 02:14:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6070bf47-5b38-4484-bf39-a1a683deb7eb does not exist
Nov 26 02:14:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 37d486bc-da31-4e92-8f76-17ae893b55a3 does not exist
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:14:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:14:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:14:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:14:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:14:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.318090281 +0000 UTC m=+0.077477142 container create 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 02:14:53 compute-0 systemd[1]: Started libpod-conmon-156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef.scope.
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.286810345 +0000 UTC m=+0.046197286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.447900718 +0000 UTC m=+0.207287599 container init 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.46473496 +0000 UTC m=+0.224121821 container start 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.469467363 +0000 UTC m=+0.228854224 container attach 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 02:14:53 compute-0 bold_knuth[451234]: 167 167
Nov 26 02:14:53 compute-0 systemd[1]: libpod-156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef.scope: Deactivated successfully.
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.477278081 +0000 UTC m=+0.236664952 container died 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 02:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb1ad3a827f162276190054cd2438e37fc5d56268e0ca44b2d07b50ad6cf06cb-merged.mount: Deactivated successfully.
Nov 26 02:14:53 compute-0 podman[451219]: 2025-11-26 02:14:53.539467994 +0000 UTC m=+0.298854855 container remove 156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:14:53 compute-0 nova_compute[350387]: 2025-11-26 02:14:53.557 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:53 compute-0 systemd[1]: libpod-conmon-156f24b378262c713b55df14d5f5c83f58237688ac1ccd48a7e3b8e72f21fcef.scope: Deactivated successfully.
Nov 26 02:14:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 312 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 26 02:14:53 compute-0 podman[451257]: 2025-11-26 02:14:53.782130613 +0000 UTC m=+0.070928448 container create 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 02:14:53 compute-0 podman[451257]: 2025-11-26 02:14:53.750923208 +0000 UTC m=+0.039721003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:53 compute-0 systemd[1]: Started libpod-conmon-6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393.scope.
Nov 26 02:14:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:53 compute-0 podman[451257]: 2025-11-26 02:14:53.998381092 +0000 UTC m=+0.287178927 container init 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:14:54 compute-0 podman[451257]: 2025-11-26 02:14:54.00724067 +0000 UTC m=+0.296038465 container start 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:14:54 compute-0 podman[451257]: 2025-11-26 02:14:54.011551061 +0000 UTC m=+0.300348926 container attach 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:14:54 compute-0 nova_compute[350387]: 2025-11-26 02:14:54.847 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:55 compute-0 dazzling_kilby[451273]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:14:55 compute-0 dazzling_kilby[451273]: --> relative data size: 1.0
Nov 26 02:14:55 compute-0 dazzling_kilby[451273]: --> All data devices are unavailable
Nov 26 02:14:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:14:55 compute-0 systemd[1]: libpod-6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393.scope: Deactivated successfully.
Nov 26 02:14:55 compute-0 systemd[1]: libpod-6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393.scope: Consumed 1.099s CPU time.
Nov 26 02:14:55 compute-0 podman[451257]: 2025-11-26 02:14:55.164454133 +0000 UTC m=+1.453251928 container died 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-505bab8c9df441c63de2f131777447d7599d26fec96688fdcc32c33c8819d340-merged.mount: Deactivated successfully.
Nov 26 02:14:55 compute-0 podman[451257]: 2025-11-26 02:14:55.231434679 +0000 UTC m=+1.520232474 container remove 6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:14:55 compute-0 systemd[1]: libpod-conmon-6e3db1dbd09de0e78645cff61e0a015a5cdad9d14f7e87131181bdd3398cb393.scope: Deactivated successfully.
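
The one-second create → init → start → attach → died → remove arc for dazzling_kilby is the signature of a short-lived `podman run --rm` container; the "-->" lines it printed resemble a ceph-volume lvm batch report, which is how cephadm probes OSD devices. A hedged sketch of that lifecycle in Python; the image digest is copied from the log, but the command run inside the container is an illustrative stand-in, not necessarily the one cephadm used:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm makes podman emit the same create/init/start/attach/died/remove
    # journal sequence once the foreground process exits.
    proc = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph-volume", "--help"],
        capture_output=True, text=True,
    )
    print(proc.returncode, proc.stdout.splitlines()[:3])
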
Nov 26 02:14:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.295451162 +0000 UTC m=+0.059320323 container create 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:14:56 compute-0 systemd[1]: Started libpod-conmon-56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045.scope.
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.27577613 +0000 UTC m=+0.039645311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.404085365 +0000 UTC m=+0.167954546 container init 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.415755222 +0000 UTC m=+0.179624373 container start 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.420288439 +0000 UTC m=+0.184157600 container attach 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 02:14:56 compute-0 frosty_bhaskara[451462]: 167 167
Nov 26 02:14:56 compute-0 systemd[1]: libpod-56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045.scope: Deactivated successfully.
Nov 26 02:14:56 compute-0 conmon[451462]: conmon 56701e3a874c13d19d21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045.scope/container/memory.events
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.426447162 +0000 UTC m=+0.190316363 container died 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 26 02:14:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbd4acaad593cda628ff21f3e9779f0a611954e50e7dffbd720a24bcf7d4383-merged.mount: Deactivated successfully.
Nov 26 02:14:56 compute-0 podman[451447]: 2025-11-26 02:14:56.491202466 +0000 UTC m=+0.255071667 container remove 56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:14:56 compute-0 systemd[1]: libpod-conmon-56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045.scope: Deactivated successfully.
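
The conmon <nwarn> above is benign: the container exited so quickly that its scope's cgroup was removed before conmon could read memory.events. On a still-running scope that file is a flat "key count" list; a minimal reader, assuming the path format from the warning:

    from pathlib import Path

    # Path copied from the conmon warning; only a live scope has this file.
    events = Path(
        "/sys/fs/cgroup/machine.slice/"
        "libpod-56701e3a874c13d19d21b60d50543d961b2ac968f863bcfd85f7d8b2ebb9e045.scope/"
        "container/memory.events")

    if events.exists():
        # cgroup v2 format: one "<key> <count>" per line, e.g. "oom_kill 0".
        stats = dict(line.split() for line in events.read_text().splitlines())
        print("oom_kill =", stats.get("oom_kill", "0"))
    else:
        print("scope already torn down, matching the conmon warning")
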
Nov 26 02:14:56 compute-0 podman[451485]: 2025-11-26 02:14:56.770384619 +0000 UTC m=+0.078851871 container create ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:14:56 compute-0 podman[451485]: 2025-11-26 02:14:56.734122722 +0000 UTC m=+0.042589975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:56 compute-0 systemd[1]: Started libpod-conmon-ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b.scope.
Nov 26 02:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03dbb49bf50f0918981ecb37a35306dd15c4ddfeb4e6b99210222216d4dbe526/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03dbb49bf50f0918981ecb37a35306dd15c4ddfeb4e6b99210222216d4dbe526/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03dbb49bf50f0918981ecb37a35306dd15c4ddfeb4e6b99210222216d4dbe526/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03dbb49bf50f0918981ecb37a35306dd15c4ddfeb4e6b99210222216d4dbe526/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:56 compute-0 podman[451485]: 2025-11-26 02:14:56.970138715 +0000 UTC m=+0.278605967 container init ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:14:56 compute-0 podman[451485]: 2025-11-26 02:14:56.987527493 +0000 UTC m=+0.295994735 container start ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:14:56 compute-0 podman[451485]: 2025-11-26 02:14:56.994471867 +0000 UTC m=+0.302939099 container attach ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:14:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]: {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    "0": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "devices": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "/dev/loop3"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            ],
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_name": "ceph_lv0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_size": "21470642176",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "name": "ceph_lv0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "tags": {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_name": "ceph",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.crush_device_class": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.encrypted": "0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_id": "0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.vdo": "0"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            },
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "vg_name": "ceph_vg0"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        }
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    ],
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    "1": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "devices": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "/dev/loop4"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            ],
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_name": "ceph_lv1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_size": "21470642176",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "name": "ceph_lv1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "tags": {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_name": "ceph",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.crush_device_class": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.encrypted": "0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_id": "1",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.vdo": "0"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            },
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "vg_name": "ceph_vg1"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        }
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    ],
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    "2": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "devices": [
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "/dev/loop5"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            ],
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_name": "ceph_lv2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_size": "21470642176",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "name": "ceph_lv2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "tags": {
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.cluster_name": "ceph",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.crush_device_class": "",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.encrypted": "0",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osd_id": "2",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:                "ceph.vdo": "0"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            },
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "type": "block",
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:            "vg_name": "ceph_vg2"
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:        }
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]:    ]
Nov 26 02:14:57 compute-0 vibrant_ptolemy[451501]: }
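
The vibrant_ptolemy output above has the shape of `ceph-volume lvm list --format json`: a map of OSD id to LV records, each carrying the backing device and the ceph.* LV tags. A small sketch reducing it to an osd → device table, assuming the block was captured from the journal into a file:

    import json
    import sys

    # sys.argv[1]: file holding the JSON block logged above.
    report = json.load(open(sys.argv[1]))

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 835781ef-644a-4834-abb3-029e5bcba0ff ...
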
Nov 26 02:14:57 compute-0 systemd[1]: libpod-ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b.scope: Deactivated successfully.
Nov 26 02:14:57 compute-0 podman[451485]: 2025-11-26 02:14:57.811909221 +0000 UTC m=+1.120376453 container died ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-03dbb49bf50f0918981ecb37a35306dd15c4ddfeb4e6b99210222216d4dbe526-merged.mount: Deactivated successfully.
Nov 26 02:14:57 compute-0 podman[451485]: 2025-11-26 02:14:57.899277179 +0000 UTC m=+1.207744391 container remove ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:14:57 compute-0 systemd[1]: libpod-conmon-ea0a717888a9fb39e9b827d0cc4027e36bc1394a18334490d71c6dc32ba0ef6b.scope: Deactivated successfully.
Nov 26 02:14:58 compute-0 nova_compute[350387]: 2025-11-26 02:14:58.559 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:14:58 compute-0 podman[451660]: 2025-11-26 02:14:58.889554934 +0000 UTC m=+0.058535911 container create 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:14:58 compute-0 systemd[1]: Started libpod-conmon-16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169.scope.
Nov 26 02:14:58 compute-0 podman[451660]: 2025-11-26 02:14:58.868946046 +0000 UTC m=+0.037927033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:58 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:59 compute-0 podman[451660]: 2025-11-26 02:14:59.021248933 +0000 UTC m=+0.190229940 container init 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:14:59 compute-0 podman[451660]: 2025-11-26 02:14:59.032717325 +0000 UTC m=+0.201698302 container start 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 02:14:59 compute-0 podman[451660]: 2025-11-26 02:14:59.047006365 +0000 UTC m=+0.215987342 container attach 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:14:59 compute-0 sweet_nash[451676]: 167 167
Nov 26 02:14:59 compute-0 systemd[1]: libpod-16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169.scope: Deactivated successfully.
Nov 26 02:14:59 compute-0 podman[451660]: 2025-11-26 02:14:59.057423097 +0000 UTC m=+0.226404074 container died 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 02:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-167c942d2d6c2a26d2c7704cd89a795c3ab3019222f1bcb0b406af2a7783ba3b-merged.mount: Deactivated successfully.
Nov 26 02:14:59 compute-0 podman[451660]: 2025-11-26 02:14:59.130036572 +0000 UTC m=+0.299017559 container remove 16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:14:59 compute-0 systemd[1]: libpod-conmon-16c016f74d38c507f454edd0fd53567b8cd6d8b6cf6c674e0837f768c36b7169.scope: Deactivated successfully.
Nov 26 02:14:59 compute-0 podman[451698]: 2025-11-26 02:14:59.350141089 +0000 UTC m=+0.053577503 container create 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:14:59 compute-0 systemd[1]: Started libpod-conmon-184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37.scope.
Nov 26 02:14:59 compute-0 podman[451698]: 2025-11-26 02:14:59.333074251 +0000 UTC m=+0.036510665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:14:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d090d621eb6b9ddde30d55c8a5f9dc6020e64892f8682ecc96a4ae54092021f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d090d621eb6b9ddde30d55c8a5f9dc6020e64892f8682ecc96a4ae54092021f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d090d621eb6b9ddde30d55c8a5f9dc6020e64892f8682ecc96a4ae54092021f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d090d621eb6b9ddde30d55c8a5f9dc6020e64892f8682ecc96a4ae54092021f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:14:59 compute-0 podman[451698]: 2025-11-26 02:14:59.480631565 +0000 UTC m=+0.184068029 container init 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 02:14:59 compute-0 podman[451698]: 2025-11-26 02:14:59.494250646 +0000 UTC m=+0.197687070 container start 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:14:59 compute-0 podman[451698]: 2025-11-26 02:14:59.499326739 +0000 UTC m=+0.202763173 container attach 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:14:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 26 02:14:59 compute-0 podman[158021]: time="2025-11-26T02:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:14:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47845 "" "Go-http-client/1.1"
Nov 26 02:14:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9994 "" "Go-http-client/1.1"
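
The podman[158021] entries are the libpod REST service logging the exporter's HTTP calls; the socket it serves is the one podman_exporter mounts (/run/podman/podman.sock, per its config_data below). A stdlib-only sketch issuing the same containers/json request over that unix socket; the socket path and API version are taken from the log lines:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
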
Nov 26 02:14:59 compute-0 nova_compute[350387]: 2025-11-26 02:14:59.854 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:00 compute-0 musing_boyd[451714]: {
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_id": 0,
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "type": "bluestore"
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    },
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_id": 2,
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "type": "bluestore"
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    },
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_id": 1,
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:15:00 compute-0 musing_boyd[451714]:        "type": "bluestore"
Nov 26 02:15:00 compute-0 musing_boyd[451714]:    }
Nov 26 02:15:00 compute-0 musing_boyd[451714]: }
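
musing_boyd prints a second inventory, keyed by osd_uuid and naming the /dev/mapper device for each bluestore OSD (the shape matches `ceph-volume raw list`). Cross-checking it against the lvm listing above is a short consistency test; both files are assumed captured from the journal:

    import json

    lvm_report = json.load(open("lvm.json"))   # vibrant_ptolemy block
    raw_report = json.load(open("raw.json"))   # musing_boyd block

    for osd_uuid, rec in raw_report.items():
        lv = lvm_report[str(rec["osd_id"])][0]
        # /dev/mapper/ceph_vgN-ceph_lvN and /dev/ceph_vgN/ceph_lvN are the
        # same logical volume under two device-mapper aliases.
        assert lv["tags"]["ceph.osd_fsid"] == osd_uuid
        print(rec["osd_id"], rec["device"], rec["type"])
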
Nov 26 02:15:00 compute-0 systemd[1]: libpod-184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37.scope: Deactivated successfully.
Nov 26 02:15:00 compute-0 systemd[1]: libpod-184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37.scope: Consumed 1.125s CPU time.
Nov 26 02:15:00 compute-0 podman[451748]: 2025-11-26 02:15:00.733421096 +0000 UTC m=+0.054852118 container died 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d090d621eb6b9ddde30d55c8a5f9dc6020e64892f8682ecc96a4ae54092021f2-merged.mount: Deactivated successfully.
Nov 26 02:15:00 compute-0 podman[451747]: 2025-11-26 02:15:00.804528719 +0000 UTC m=+0.115217030 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 02:15:00 compute-0 podman[451749]: 2025-11-26 02:15:00.821576856 +0000 UTC m=+0.128299786 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 02:15:00 compute-0 podman[451750]: 2025-11-26 02:15:00.826386271 +0000 UTC m=+0.138583994 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
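
The three health_status entries above are periodic `podman healthcheck run` results (typically fired by a per-container systemd timer); each config_data block shows the configured test command and the healthcheck mount. The equivalent manual probe, using the container names from the log:

    import subprocess

    for name in ("ceilometer_agent_compute", "ovn_metadata_agent",
                 "podman_exporter"):
        # Runs the container's configured healthcheck; exit 0 means healthy.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
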
Nov 26 02:15:00 compute-0 podman[451748]: 2025-11-26 02:15:00.843459839 +0000 UTC m=+0.164890831 container remove 184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:15:00 compute-0 systemd[1]: libpod-conmon-184e821f5d645ce4f72504141c1e6d3384fdf06edc603e4b2f571fa1b9a70f37.scope: Deactivated successfully.
Nov 26 02:15:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:15:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:15:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:15:00 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:15:00 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 704d9abb-20ba-4757-a6c1-e1a177f56d24 does not exist
Nov 26 02:15:00 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e7e70f70-1818-4fec-9ef9-f56e488a6b47 does not exist
Nov 26 02:15:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:15:01 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:15:01 compute-0 openstack_network_exporter[367323]: ERROR   02:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:15:01 compute-0 openstack_network_exporter[367323]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:15:01 compute-0 openstack_network_exporter[367323]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:15:01 compute-0 openstack_network_exporter[367323]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:15:01 compute-0 openstack_network_exporter[367323]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
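
The exporter errors above mean no ovsdb-server or ovn-northd control sockets were found, which is expected on a compute node that runs ovn-controller but not ovn-northd. ovs-appctl reaches daemons through unix sockets conventionally named <daemon>.<pid>.ctl under the run directory; a hedged check (the glob patterns are that convention, not paths from this log):

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/openvswitch/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket (matches the error)")
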
Nov 26 02:15:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 26 02:15:03 compute-0 ovn_controller[89102]: 2025-11-26T02:15:03Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:03:6c 10.100.0.11
Nov 26 02:15:03 compute-0 nova_compute[350387]: 2025-11-26 02:15:03.565 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 719 KiB/s wr, 43 op/s
Nov 26 02:15:04 compute-0 nova_compute[350387]: 2025-11-26 02:15:04.857 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.155531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305155905, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 889, "num_deletes": 250, "total_data_size": 1238249, "memory_usage": 1268000, "flush_reason": "Manual Compaction"}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305166354, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1216842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39362, "largest_seqno": 40250, "table_properties": {"data_size": 1212353, "index_size": 2141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 8678, "raw_average_key_size": 17, "raw_value_size": 1203532, "raw_average_value_size": 2411, "num_data_blocks": 96, "num_entries": 499, "num_filter_entries": 499, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123226, "oldest_key_time": 1764123226, "file_creation_time": 1764123305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 10910 microseconds, and 5547 cpu microseconds.
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.166448) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1216842 bytes OK
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.166468) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.169051) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.169069) EVENT_LOG_v1 {"time_micros": 1764123305169064, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.169089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1233912, prev total WAL file size 1233912, number of live WAL files 2.
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.170512) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1188KB)], [92(6613KB)]
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305170554, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 7989543, "oldest_snapshot_seqno": -1}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5576 keys, 7254493 bytes, temperature: kUnknown
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305226594, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7254493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7219922, "index_size": 19549, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 144707, "raw_average_key_size": 25, "raw_value_size": 7121670, "raw_average_value_size": 1277, "num_data_blocks": 778, "num_entries": 5576, "num_filter_entries": 5576, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.227191) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7254493 bytes
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.229592) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 128.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 6.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(12.5) write-amplify(6.0) OK, records in: 6088, records dropped: 512 output_compression: NoCompression
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.229616) EVENT_LOG_v1 {"time_micros": 1764123305229605, "job": 54, "event": "compaction_finished", "compaction_time_micros": 56481, "compaction_time_cpu_micros": 36347, "output_level": 6, "num_output_files": 1, "total_output_size": 7254493, "num_input_records": 6088, "num_output_records": 5576, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305231617, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123305234336, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.170311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.234959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.234964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.234967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.234970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:15:05.234972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:15:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 29 KiB/s wr, 17 op/s
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.105 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.106 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.132 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.231 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.232 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.245 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.246 350391 INFO nova.compute.claims [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 02:15:07 compute-0 nova_compute[350387]: 2025-11-26 02:15:07.475 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:07 compute-0 podman[451874]: 2025-11-26 02:15:07.59798986 +0000 UTC m=+0.143056640 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:15:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 23 KiB/s wr, 47 op/s
Nov 26 02:15:07 compute-0 podman[451875]: 2025-11-26 02:15:07.605612703 +0000 UTC m=+0.142800652 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:15:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:15:07 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883147519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.030 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.043 350391 DEBUG nova.compute.provider_tree [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.059 350391 DEBUG nova.scheduler.client.report [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.084 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.085 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.144 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.145 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.165 350391 INFO nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.185 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.290 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.295 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.296 350391 INFO nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Creating image(s)#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.345 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.404 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.456 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.467 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.565 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.570 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "75aa7190add890d937d223054d1bca64341e098f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.572 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "75aa7190add890d937d223054d1bca64341e098f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.572 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "75aa7190add890d937d223054d1bca64341e098f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.620 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.633 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f add194b7-6a6c-48ef-8355-3344185eb43e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.669 350391 DEBUG nova.policy [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9710ede02d47cbb016ff596d936633', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 02:15:08 compute-0 nova_compute[350387]: 2025-11-26 02:15:08.673 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.052 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f add194b7-6a6c-48ef-8355-3344185eb43e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.170 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] resizing rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.355 350391 DEBUG nova.objects.instance [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'migration_context' on Instance uuid add194b7-6a6c-48ef-8355-3344185eb43e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.375 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.375 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Ensure instance console log exists: /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.377 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.378 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.378 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 315 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 47 op/s
Nov 26 02:15:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:09.794 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:15:09 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:09.796 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.798 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.860 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:09 compute-0 nova_compute[350387]: 2025-11-26 02:15:09.913 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Successfully created port: caa46d5d-d6ee-42de-a514-e911d1f0fc60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 02:15:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.717 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Successfully updated port: caa46d5d-d6ee-42de-a514-e911d1f0fc60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.735 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.735 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.736 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.816 350391 DEBUG nova.compute.manager [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-changed-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.817 350391 DEBUG nova.compute.manager [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Refreshing instance network info cache due to event network-changed-caa46d5d-d6ee-42de-a514-e911d1f0fc60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.818 350391 DEBUG oslo_concurrency.lockutils [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:15:10 compute-0 nova_compute[350387]: 2025-11-26 02:15:10.931 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:11 compute-0 podman[452105]: 2025-11-26 02:15:11.586072849 +0000 UTC m=+0.129069187 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., architecture=x86_64)
Nov 26 02:15:11 compute-0 podman[452106]: 2025-11-26 02:15:11.601159052 +0000 UTC m=+0.139999954 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:15:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 347 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 909 KiB/s wr, 51 op/s
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.077 350391 DEBUG nova.network.neutron [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.100 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.100 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Instance network_info: |[{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.101 350391 DEBUG oslo_concurrency.lockutils [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.103 350391 DEBUG nova.network.neutron [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Refreshing network info cache for port caa46d5d-d6ee-42de-a514-e911d1f0fc60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.109 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Start _get_guest_xml network_info=[{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:12:09Z,direct_url=<?>,disk_format='qcow2',id=dbaf181e-c7da-4938-bfef-7ab3aa9a19bc,min_disk=0,min_ram=0,name='tempest-scenario-img--177366414',owner='cb4e9e1ffe494961ba45f8f24f21b106',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:12:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encrypted': False, 'encryption_format': None, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.120 350391 WARNING nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.136 350391 DEBUG nova.virt.libvirt.host [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.138 350391 DEBUG nova.virt.libvirt.host [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.143 350391 DEBUG nova.virt.libvirt.host [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.145 350391 DEBUG nova.virt.libvirt.host [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.146 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.147 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T02:09:05Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6db4d080-ab1e-4a78-a6d9-858137b0ba8b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T02:12:09Z,direct_url=<?>,disk_format='qcow2',id=dbaf181e-c7da-4938-bfef-7ab3aa9a19bc,min_disk=0,min_ram=0,name='tempest-scenario-img--177366414',owner='cb4e9e1ffe494961ba45f8f24f21b106',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T02:12:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.148 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.149 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.150 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.151 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.152 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.153 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.154 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.155 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.156 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.157 350391 DEBUG nova.virt.hardware [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.164 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:15:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491188137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.675 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.730 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:12 compute-0 nova_compute[350387]: 2025-11-26 02:15:12.743 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:12 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:12.799 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 26 02:15:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673757962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.238 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.241 350391 DEBUG nova.virt.libvirt.vif [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',id=15,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-lsmzl6nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:15:08Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=add194b7-6a6c-48ef-8355-3344185eb43e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.241 350391 DEBUG nova.network.os_vif_util [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.242 350391 DEBUG nova.network.os_vif_util [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
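The Converting/Converted pair above shows nova.network.os_vif_util mapping the Neutron-supplied VIF dict onto an os-vif VIFOpenVSwitch object. A simplified dataclass sketch of that mapping, with field names taken from the log (the real converter also builds Network, Subnet and port-profile objects):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool
        preserve_on_delete: bool
        plugin: str = 'ovs'

    def nova_to_osvif_vif(vif):
        # Identity and state come from the top-level keys; the OVS-specific
        # fields come from 'details' and 'devname', as visible in the log.
        details = vif['details']
        return VIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            bridge_name=details['bridge_name'],
            vif_name=vif['devname'],
            has_traffic_filtering=details['port_filter'],
            active=vif['active'],
            preserve_on_delete=vif['preserve_on_delete'],
        )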
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.244 350391 DEBUG nova.objects.instance [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'pci_devices' on Instance uuid add194b7-6a6c-48ef-8355-3344185eb43e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.260 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] End _get_guest_xml xml=<domain type="kvm">
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <uuid>add194b7-6a6c-48ef-8355-3344185eb43e</uuid>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <name>instance-0000000f</name>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <memory>131072</memory>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <vcpu>1</vcpu>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <metadata>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:name>te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i</nova:name>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:creationTime>2025-11-26 02:15:12</nova:creationTime>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:flavor name="m1.nano">
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:memory>128</nova:memory>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:disk>1</nova:disk>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:swap>0</nova:swap>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </nova:flavor>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:owner>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:user uuid="3a9710ede02d47cbb016ff596d936633">tempest-PrometheusGabbiTest-624283200-project-member</nova:user>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:project uuid="cb4e9e1ffe494961ba45f8f24f21b106">tempest-PrometheusGabbiTest-624283200</nova:project>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </nova:owner>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:root type="image" uuid="dbaf181e-c7da-4938-bfef-7ab3aa9a19bc"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <nova:ports>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <nova:port uuid="caa46d5d-d6ee-42de-a514-e911d1f0fc60">
Nov 26 02:15:13 compute-0 nova_compute[350387]:          <nova:ip type="fixed" address="10.100.2.215" ipVersion="4"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:        </nova:port>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </nova:ports>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </nova:instance>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </metadata>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <sysinfo type="smbios">
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <system>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="manufacturer">RDO</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="serial">add194b7-6a6c-48ef-8355-3344185eb43e</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="uuid">add194b7-6a6c-48ef-8355-3344185eb43e</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <entry name="family">Virtual Machine</entry>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </system>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </sysinfo>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <os>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <boot dev="hd"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <smbios mode="sysinfo"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </os>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <features>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <acpi/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <apic/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <vmcoreinfo/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </features>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <clock offset="utc">
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <timer name="hpet" present="no"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </clock>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <cpu mode="host-model" match="exact">
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </cpu>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  <devices>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <disk type="network" device="disk">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/add194b7-6a6c-48ef-8355-3344185eb43e_disk">
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </source>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <target dev="vda" bus="virtio"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <disk type="network" device="cdrom">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <driver type="raw" cache="none"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <source protocol="rbd" name="vms/add194b7-6a6c-48ef-8355-3344185eb43e_disk.config">
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <host name="192.168.122.100" port="6789"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </source>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <auth username="openstack">
Nov 26 02:15:13 compute-0 nova_compute[350387]:        <secret type="ceph" uuid="36901f64-240e-5c29-a2e2-29b56f2c329c"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      </auth>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <target dev="sda" bus="sata"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </disk>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <interface type="ethernet">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <mac address="fa:16:3e:6e:b7:00"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <mtu size="1442"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <target dev="tapcaa46d5d-d6"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </interface>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <serial type="pty">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <log file="/var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/console.log" append="off"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </serial>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <video>
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <model type="virtio"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </video>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <input type="tablet" bus="usb"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <rng model="virtio">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <backend model="random">/dev/urandom</backend>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </rng>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <controller type="usb" index="0"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    <memballoon model="virtio">
Nov 26 02:15:13 compute-0 nova_compute[350387]:      <stats period="10"/>
Nov 26 02:15:13 compute-0 nova_compute[350387]:    </memballoon>
Nov 26 02:15:13 compute-0 nova_compute[350387]:  </devices>
Nov 26 02:15:13 compute-0 nova_compute[350387]: </domain>
Nov 26 02:15:13 compute-0 nova_compute[350387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
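The XML dumped above is the complete domain definition handed to libvirt: a q35 KVM guest with a host-model CPU, two RBD-backed disks (root disk on virtio, config drive as a SATA cdrom), a tap interface on br-int, and pty serial plus VNC consoles. A minimal ElementTree sketch that produces the skeleton of such a document, with values copied from the log (illustrative only, not Nova's config generator):

    import xml.etree.ElementTree as ET

    domain = ET.Element('domain', type='kvm')
    ET.SubElement(domain, 'uuid').text = 'add194b7-6a6c-48ef-8355-3344185eb43e'
    ET.SubElement(domain, 'name').text = 'instance-0000000f'
    ET.SubElement(domain, 'memory').text = '131072'   # KiB, i.e. the flavor's 128 MiB
    ET.SubElement(domain, 'vcpu').text = '1'

    os_el = ET.SubElement(domain, 'os')
    ET.SubElement(os_el, 'type', arch='x86_64', machine='q35').text = 'hvm'
    ET.SubElement(os_el, 'boot', dev='hd')

    devices = ET.SubElement(domain, 'devices')
    disk = ET.SubElement(devices, 'disk', type='network', device='disk')
    source = ET.SubElement(disk, 'source', protocol='rbd',
                           name='vms/add194b7-6a6c-48ef-8355-3344185eb43e_disk')
    ET.SubElement(source, 'host', name='192.168.122.100', port='6789')
    ET.SubElement(disk, 'target', dev='vda', bus='virtio')

    print(ET.tostring(domain, encoding='unicode'))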
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.260 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Preparing to wait for external event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.260 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.269 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.269 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
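The three lockutils lines above are the "prepare before plug" half of Nova's external-event handshake: a waiter for network-vif-plugged is registered under the per-instance events lock before the VIF is plugged, and the spawn path later blocks on it until Neutron's callback (the network-vif-plugged lines further down) pops the event. A threading sketch of the same prepare/pop/wait pattern, illustrative only (Nova uses eventlet primitives rather than threading):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # (instance_uuid, event_name) -> threading.Event

        def prepare_for_instance_event(self, uuid, name):
            with self._lock:    # corresponds to the "-events" lock in the log
                return self._events.setdefault((uuid, name), threading.Event())

        def pop_instance_event(self, uuid, name):
            with self._lock:
                event = self._events.pop((uuid, name), None)
            if event:
                event.set()     # wakes the spawning thread

    events = InstanceEvents()
    # Spawn path: register first, then plug the VIF and start the guest.
    waiter = events.prepare_for_instance_event('add194b7', 'network-vif-plugged')
    # Neutron callback path, once OVN reports the port bound and up:
    events.pop_instance_event('add194b7', 'network-vif-plugged')
    waiter.wait(timeout=300)    # returns at once, cf. "wait completed in 0 seconds"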
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.270 350391 DEBUG nova.virt.libvirt.vif [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T02:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',id=15,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-lsmzl6nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T02:15:08Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=add194b7-6a6c-48ef-8355-3344185eb43e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.270 350391 DEBUG nova.network.os_vif_util [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.271 350391 DEBUG nova.network.os_vif_util [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.271 350391 DEBUG os_vif [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.273 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.274 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.274 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.278 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.278 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa46d5d-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.279 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcaa46d5d-d6, col_values=(('external_ids', {'iface-id': 'caa46d5d-d6ee-42de-a514-e911d1f0fc60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:b7:00', 'vm-uuid': 'add194b7-6a6c-48ef-8355-3344185eb43e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.281 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.283 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:15:13 compute-0 NetworkManager[48886]: <info>  [1764123313.2905] manager: (tapcaa46d5d-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.294 350391 INFO os_vif [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6')#033[00m
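The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the OVSDB equivalent of two ovs-vsctl invocations; the external_ids written onto the Interface row are what lets ovn-controller match the tap device to its logical port (the "Claiming lport" lines that follow). A hedged subprocess sketch of the same plumbing:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # --may-exist mirrors may_exist=True in the logged ovsdbapp commands.
        subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', bridge], check=True)
        subprocess.run(
            ['ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
             '--', 'set', 'Interface', dev,
             f'external_ids:iface-id={iface_id}',
             'external_ids:iface-status=active',
             f'external_ids:attached-mac={mac}',
             f'external_ids:vm-uuid={vm_uuid}'],
            check=True)

    plug_ovs_port('br-int', 'tapcaa46d5d-d6',
                  'caa46d5d-d6ee-42de-a514-e911d1f0fc60',
                  'fa:16:3e:6e:b7:00',
                  'add194b7-6a6c-48ef-8355-3344185eb43e')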
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.341 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.342 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.342 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] No VIF found with MAC fa:16:3e:6e:b7:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.343 350391 INFO nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Using config drive#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.369 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.569 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 549 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.726 350391 DEBUG nova.network.neutron [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated VIF entry in instance network info cache for port caa46d5d-d6ee-42de-a514-e911d1f0fc60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.727 350391 DEBUG nova.network.neutron [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.742 350391 DEBUG oslo_concurrency.lockutils [req-0db86883-1ca2-40e3-a296-69eba71eb37f req-80608583-5b23-41a1-aa4d-aeeada736db4 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.756 350391 INFO nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Creating config drive at /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.762 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78hrb4ke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.891 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78hrb4ke" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
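The config drive is a plain ISO9660 image (volume label config-2) built from a staging directory of metadata files; the flags in the logged command enable Rock Ridge and Joliet extensions plus relaxed filenames so paths such as openstack/latest/meta_data.json survive intact. A thin wrapper sketch around the exact command shown above:

    import subprocess

    def build_config_drive(output_path, staging_dir):
        # Flag-for-flag the mkisofs invocation from the log; the publisher
        # string is only descriptive metadata embedded in the image.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', output_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging_dir],
            check=True)

    # e.g. build_config_drive('/var/lib/nova/instances/<uuid>/disk.config', '/tmp/tmp78hrb4ke')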
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.941 350391 DEBUG nova.storage.rbd_utils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] rbd image add194b7-6a6c-48ef-8355-3344185eb43e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 26 02:15:13 compute-0 nova_compute[350387]: 2025-11-26 02:15:13.951 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config add194b7-6a6c-48ef-8355-3344185eb43e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.246 350391 DEBUG oslo_concurrency.processutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config add194b7-6a6c-48ef-8355-3344185eb43e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.247 350391 INFO nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Deleting local config drive /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e/disk.config because it was imported into RBD.#033[00m
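Because this host uses the RBD image backend, the freshly built ISO is imported into the vms pool as <uuid>_disk.config (image format 2) and the local copy is removed; the guest then reads it over the network through the SATA cdrom device declared in the XML above. A sketch of that import-and-clean-up step:

    import os
    import subprocess

    def import_config_drive(local_path, pool, image_name):
        # Same rbd invocation as in the log.
        subprocess.run(
            ['rbd', 'import', '--pool', pool, local_path, image_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        os.unlink(local_path)   # "Deleting local config drive ... imported into RBD"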
Nov 26 02:15:14 compute-0 kernel: tapcaa46d5d-d6: entered promiscuous mode
Nov 26 02:15:14 compute-0 NetworkManager[48886]: <info>  [1764123314.3389] manager: (tapcaa46d5d-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Nov 26 02:15:14 compute-0 ovn_controller[89102]: 2025-11-26T02:15:14Z|00156|binding|INFO|Claiming lport caa46d5d-d6ee-42de-a514-e911d1f0fc60 for this chassis.
Nov 26 02:15:14 compute-0 ovn_controller[89102]: 2025-11-26T02:15:14Z|00157|binding|INFO|caa46d5d-d6ee-42de-a514-e911d1f0fc60: Claiming fa:16:3e:6e:b7:00 10.100.2.215
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.354 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.392 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:14 compute-0 ovn_controller[89102]: 2025-11-26T02:15:14Z|00158|binding|INFO|Setting lport caa46d5d-d6ee-42de-a514-e911d1f0fc60 ovn-installed in OVS
Nov 26 02:15:14 compute-0 systemd-udevd[452277]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:15:14 compute-0 systemd-machined[138512]: New machine qemu-16-instance-0000000f.
Nov 26 02:15:14 compute-0 NetworkManager[48886]: <info>  [1764123314.4264] device (tapcaa46d5d-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 02:15:14 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 26 02:15:14 compute-0 NetworkManager[48886]: <info>  [1764123314.4279] device (tapcaa46d5d-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.483 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:b7:00 10.100.2.215'], port_security=['fa:16:3e:6e:b7:00 10.100.2.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.215/16', 'neutron:device_id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02245f78-e221-4ecd-ae3b-975782a68c5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20511ddf-b2cd-472a-84f8-e35fd6d0c575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61c2d3e7-61df-4898-a297-774785d24b01, chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=caa46d5d-d6ee-42de-a514-e911d1f0fc60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.484 286844 INFO neutron.agent.ovn.metadata.agent [-] Port caa46d5d-d6ee-42de-a514-e911d1f0fc60 in datapath 02245f78-e221-4ecd-ae3b-975782a68c5e bound to our chassis#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.487 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02245f78-e221-4ecd-ae3b-975782a68c5e#033[00m
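The metadata agent reacted because the Port_Binding row's chassis column changed from empty to this chassis, which is exactly what its PortBindingUpdatedEvent matcher above checks. The predicate reduces to something like this illustrative sketch:

    def port_bound_to_our_chassis(old_chassis, new_chassis, our_chassis):
        # Matches the logged event: the port was unbound (old chassis list
        # empty) and is now bound to this host's chassis.
        return not old_chassis and our_chassis in new_chassis

    print(port_bound_to_our_chassis(
        [], ['compute-0.ctlplane.example.com'],
        'compute-0.ctlplane.example.com'))   # True -> provision metadata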
Nov 26 02:15:14 compute-0 ovn_controller[89102]: 2025-11-26T02:15:14Z|00159|binding|INFO|Setting lport caa46d5d-d6ee-42de-a514-e911d1f0fc60 up in Southbound
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.517 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[8e33f376-c10b-4d49-8b85-efd41ea70b4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.571 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[30b1b6f5-2878-4e26-b6a6-1ec62f34c7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.575 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3894b7-f9bb-42bc-bde6-1ad65247443f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.622 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5d853b-165a-482c-84e0-70344bb14411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.653 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[fe451ddd-34ab-4417-b949-b5fc98ef4737]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02245f78-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c1:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677802, 'reachable_time': 23686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452292, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.680 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfa890b-0439-4bae-b5b3-89d3ebfb3115]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap02245f78-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677815, 'tstamp': 677815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452293, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02245f78-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677819, 'tstamp': 677819}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 452293, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.683 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02245f78-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.686 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.688 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.689 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02245f78-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.690 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.698 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02245f78-e0, col_values=(('external_ids', {'iface-id': 'b6066942-f0e5-4ff0-92ae-a027fdd86fa7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:14.698 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
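The privsep replies and OVSDB transactions above show the finished metadata plumbing: inside the ovnmeta-02245f78... namespace, interface tap02245f78-e1 carries both a subnet address (10.100.0.2/16) and the metadata address 169.254.169.254/32, while its veth peer tap02245f78-e0 sits on br-int with an iface-id so OVN can steer metadata traffic into the namespace. A hedged CLI-style sketch of the same layout (the agent actually does this through pyroute2 under privsep, not by shelling out):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    ns = 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e'
    run('ip', 'netns', 'add', ns)
    run('ip', 'link', 'add', 'tap02245f78-e0', 'type', 'veth',
        'peer', 'name', 'tap02245f78-e1')
    run('ip', 'link', 'set', 'tap02245f78-e1', 'netns', ns)
    run('ip', '-n', ns, 'addr', 'add', '10.100.0.2/16', 'dev', 'tap02245f78-e1')
    run('ip', '-n', ns, 'addr', 'add', '169.254.169.254/32', 'dev', 'tap02245f78-e1')
    run('ip', '-n', ns, 'link', 'set', 'tap02245f78-e1', 'up')
    run('ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap02245f78-e0',
        '--', 'set', 'Interface', 'tap02245f78-e0',
        'external_ids:iface-id=b6066942-f0e5-4ff0-92ae-a027fdd86fa7')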
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.778 350391 DEBUG nova.compute.manager [req-e09596cc-1635-4473-a7ec-81ad49217819 req-6cfb0fe5-1ff2-4cb2-9df9-0b75bfba71d1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.779 350391 DEBUG oslo_concurrency.lockutils [req-e09596cc-1635-4473-a7ec-81ad49217819 req-6cfb0fe5-1ff2-4cb2-9df9-0b75bfba71d1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.779 350391 DEBUG oslo_concurrency.lockutils [req-e09596cc-1635-4473-a7ec-81ad49217819 req-6cfb0fe5-1ff2-4cb2-9df9-0b75bfba71d1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.780 350391 DEBUG oslo_concurrency.lockutils [req-e09596cc-1635-4473-a7ec-81ad49217819 req-6cfb0fe5-1ff2-4cb2-9df9-0b75bfba71d1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:14 compute-0 nova_compute[350387]: 2025-11-26 02:15:14.780 350391 DEBUG nova.compute.manager [req-e09596cc-1635-4473-a7ec-81ad49217819 req-6cfb0fe5-1ff2-4cb2-9df9-0b75bfba71d1 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Processing event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 02:15:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.570 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123315.5696814, add194b7-6a6c-48ef-8355-3344185eb43e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.571 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] VM Started (Lifecycle Event)#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.576 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.584 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.593 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.595 350391 INFO nova.virt.libvirt.driver [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Instance spawned successfully.#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.596 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 02:15:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.611 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.631 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.632 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.634 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.636 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.638 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.639 350391 DEBUG nova.virt.libvirt.driver [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.644 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.645 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123315.570291, add194b7-6a6c-48ef-8355-3344185eb43e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.645 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] VM Paused (Lifecycle Event)#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.693 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.700 350391 DEBUG nova.virt.driver [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] Emitting event <LifecycleEvent: 1764123315.5823095, add194b7-6a6c-48ef-8355-3344185eb43e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.701 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] VM Resumed (Lifecycle Event)#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.708 350391 INFO nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Took 7.42 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.708 350391 DEBUG nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.727 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.734 350391 DEBUG nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.765 350391 INFO nova.compute.manager [None req-9345a393-64bd-43c2-b8fb-ebe7fa75f844 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.783 350391 INFO nova.compute.manager [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Took 8.60 seconds to build instance.#033[00m
Nov 26 02:15:15 compute-0 nova_compute[350387]: 2025-11-26 02:15:15.824 350391 DEBUG oslo_concurrency.lockutils [None req-9a2e80f1-d22c-4662-ab6a-abb36f2682bc 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:17 compute-0 podman[452336]: 2025-11-26 02:15:17.56258891 +0000 UTC m=+0.105973020 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, managed_by=edpm_ansible)
Nov 26 02:15:17 compute-0 podman[452337]: 2025-11-26 02:15:17.592507239 +0000 UTC m=+0.126222608 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:15:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 437 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.650 350391 DEBUG nova.compute.manager [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.651 350391 DEBUG oslo_concurrency.lockutils [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.651 350391 DEBUG oslo_concurrency.lockutils [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.651 350391 DEBUG oslo_concurrency.lockutils [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.652 350391 DEBUG nova.compute.manager [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] No waiting events found dispatching network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:15:17 compute-0 nova_compute[350387]: 2025-11-26 02:15:17.652 350391 WARNING nova.compute.manager [req-f00dbfc9-9b8f-4f47-a1b2-56cbf18ad02f req-2c443b2d-9060-427b-96e6-199b263ce041 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received unexpected event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 for instance with vm_state active and task_state None.#033[00m
Nov 26 02:15:18 compute-0 nova_compute[350387]: 2025-11-26 02:15:18.283 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:18 compute-0 nova_compute[350387]: 2025-11-26 02:15:18.575 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 26 02:15:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:20.196 287163 DEBUG eventlet.wsgi.server [-] (287163) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:20.198 287163 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: Accept: */*
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: Connection: close
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: Content-Type: text/plain
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: Host: 169.254.169.254
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: User-Agent: curl/7.84.0
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: X-Forwarded-For: 10.100.0.6
Nov 26 02:15:20 compute-0 ovn_metadata_agent[286828]: X-Ovn-Network-Id: d28058d3-5123-44dd-9839-1c451b6aed46 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 26 02:15:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1011 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.550 287163 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.551 287163 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.3531673#033[00m
Nov 26 02:15:22 compute-0 haproxy-metadata-proxy-d28058d3-5123-44dd-9839-1c451b6aed46[450393]: 10.100.0.6:33238 [26/Nov/2025:02:15:20.195] listener listener/metadata 0/0/0/2356/2356 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
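The three entries above trace one complete guest metadata round trip: curl inside the instance queries the link-local address 169.254.169.254, the OVN metadata agent proxies the request out of its ovnmeta namespace, and the per-network haproxy records the 200. A minimal sketch of the same lookup, assuming it runs inside the guest and using only the Python standard library:

    #!/usr/bin/env python3
    # Reproduce the metadata lookup logged above from inside the guest.
    # Endpoint and path are taken verbatim from the captured request.
    import urllib.request

    URL = "http://169.254.169.254/latest/meta-data/public-ipv4"
    with urllib.request.urlopen(URL, timeout=10) as resp:
        # The agent answered HTTP 200 with a short text/plain body
        # (for this port, presumably the floating IP 192.168.122.242).
        print(resp.status, resp.read().decode().strip())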
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.698 287163 DEBUG eventlet.wsgi.server [-] (287163) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.700 287163 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: Accept: */*
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: Connection: close
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: Content-Length: 100
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: Content-Type: application/x-www-form-urlencoded
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: Host: 169.254.169.254
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: User-Agent: curl/7.84.0
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: X-Forwarded-For: 10.100.0.6
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: X-Ovn-Network-Id: d28058d3-5123-44dd-9839-1c451b6aed46
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]:
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.961 287163 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 26 02:15:22 compute-0 haproxy-metadata-proxy-d28058d3-5123-44dd-9839-1c451b6aed46[450393]: 10.100.0.6:33248 [26/Nov/2025:02:15:22.697] listener listener/metadata 0/0/0/264/264 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 26 02:15:22 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:22.967 287163 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2673154#033[00m
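The POST above is the guest persisting its generated password through Nova's metadata password endpoint; the debug dump shows a 100-byte application/x-www-form-urlencoded body, which nova later stores in the instance system_metadata (visible as password_0 in the Instance dump at 02:15:26.115). A comparable guest-side sketch, again standard library only:

    #!/usr/bin/env python3
    # Post a password blob to the Nova metadata password endpoint, as the
    # guest does above. The 100-byte body mirrors the logged Content-Length.
    import urllib.request

    URL = "http://169.254.169.254/openstack/2013-10-17/password"
    body = b"test" * 25  # 100 bytes, matching the captured request body
    req = urllib.request.Request(URL, data=body, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status)  # the agent and haproxy both logged 200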
Nov 26 02:15:23 compute-0 nova_compute[350387]: 2025-11-26 02:15:23.287 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:23 compute-0 nova_compute[350387]: 2025-11-26 02:15:23.578 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 959 KiB/s wr, 98 op/s
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.000 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.000 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.003 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 364 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.623 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.624 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.624 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.626 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.627 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.629 350391 INFO nova.compute.manager [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Terminating instance#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.632 350391 DEBUG nova.compute.manager [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:15:25 compute-0 kernel: tap20b2d898-f3 (unregistering): left promiscuous mode
Nov 26 02:15:25 compute-0 NetworkManager[48886]: <info>  [1764123325.7865] device (tap20b2d898-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:15:25 compute-0 ovn_controller[89102]: 2025-11-26T02:15:25Z|00160|binding|INFO|Releasing lport 20b2d898-f324-4aae-ae7e-59312c845d00 from this chassis (sb_readonly=0)
Nov 26 02:15:25 compute-0 ovn_controller[89102]: 2025-11-26T02:15:25Z|00161|binding|INFO|Setting lport 20b2d898-f324-4aae-ae7e-59312c845d00 down in Southbound
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.801 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:25 compute-0 ovn_controller[89102]: 2025-11-26T02:15:25Z|00162|binding|INFO|Removing iface tap20b2d898-f3 ovn-installed in OVS
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.815 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.827 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0d:fa 10.100.0.6'], port_security=['fa:16:3e:04:0d:fa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8f12f2a2-6379-4fcb-b93e-eac05f10f599', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d28058d3-5123-44dd-9839-1c451b6aed46', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fc101eeda814bb98f1a44c789c8958f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42e95318-726c-4ecf-a3b6-a6d03830d387 eae8d84f-0041-4340-9d86-01ee4f5b7c47', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38707aa4-19c4-4574-af55-4f9c77111de6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=20b2d898-f324-4aae-ae7e-59312c845d00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.834 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 20b2d898-f324-4aae-ae7e-59312c845d00 in datapath d28058d3-5123-44dd-9839-1c451b6aed46 unbound from our chassis#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.838 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d28058d3-5123-44dd-9839-1c451b6aed46, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:15:25 compute-0 nova_compute[350387]: 2025-11-26 02:15:25.838 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.840 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[11c99e90-5acf-459e-8e8e-7a539fc209c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:25.841 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46 namespace which is not needed anymore#033[00m
Nov 26 02:15:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 26 02:15:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 43.919s CPU time.
Nov 26 02:15:25 compute-0 systemd-machined[138512]: Machine qemu-14-instance-0000000e terminated.
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.074 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.084 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [NOTICE]   (450391) : haproxy version is 2.8.14-c23fe91
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [NOTICE]   (450391) : path to executable is /usr/sbin/haproxy
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [WARNING]  (450391) : Exiting Master process...
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [WARNING]  (450391) : Exiting Master process...
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.093 350391 INFO nova.virt.libvirt.driver [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Instance destroyed successfully.#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.094 350391 DEBUG nova.objects.instance [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lazy-loading 'resources' on Instance uuid 8f12f2a2-6379-4fcb-b93e-eac05f10f599 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [ALERT]    (450391) : Current worker (450393) exited with code 143 (Terminated)
Nov 26 02:15:26 compute-0 neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46[450387]: [WARNING]  (450391) : All workers exited. Exiting... (0)
Nov 26 02:15:26 compute-0 systemd[1]: libpod-6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c.scope: Deactivated successfully.
Nov 26 02:15:26 compute-0 podman[452403]: 2025-11-26 02:15:26.107798653 +0000 UTC m=+0.108507832 container died 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.115 350391 DEBUG nova.virt.libvirt.vif [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1676766604',display_name='tempest-TestServerBasicOps-server-1676766604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1676766604',id=14,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbE8la1gGbjQiMcSfF/XigEXCELNDkg7Bg++ChqSdPSjpeMvCOTzJudEtOKmieBCaeA40kk3ByO6Qz/g2P2LT+PPC7W+fCyL+638Mcm5qJam9Lyn3htqyGvZHvxNtPzpg==',key_name='tempest-TestServerBasicOps-638417550',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:14:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8fc101eeda814bb98f1a44c789c8958f',ramdisk_id='',reservation_id='r-51jplxlq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-969259594',owner_user_name='tempest-TestServerBasicOps-969259594-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:15:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='236e06cd46874605a18288ba033ee875',uuid=8f12f2a2-6379-4fcb-b93e-eac05f10f599,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.116 350391 DEBUG nova.network.os_vif_util [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converting VIF {"id": "20b2d898-f324-4aae-ae7e-59312c845d00", "address": "fa:16:3e:04:0d:fa", "network": {"id": "d28058d3-5123-44dd-9839-1c451b6aed46", "bridge": "br-int", "label": "tempest-TestServerBasicOps-996320676-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fc101eeda814bb98f1a44c789c8958f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20b2d898-f3", "ovs_interfaceid": "20b2d898-f324-4aae-ae7e-59312c845d00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.117 350391 DEBUG nova.network.os_vif_util [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.119 350391 DEBUG os_vif [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.121 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.122 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20b2d898-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.130 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.132 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.135 350391 INFO os_vif [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0d:fa,bridge_name='br-int',has_traffic_filtering=True,id=20b2d898-f324-4aae-ae7e-59312c845d00,network=Network(d28058d3-5123-44dd-9839-1c451b6aed46),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20b2d898-f3')#033[00m
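At 02:15:26.122 os-vif removes the tap from br-int by committing a DelPortCommand through ovsdbapp. A rough standalone equivalent issued directly against the local ovsdb is sketched below; the socket path is an assumption, while the port name, bridge, and if_exists flag come from the logged command:

    #!/usr/bin/env python3
    # Sketch of the DelPortCommand seen above, run via ovsdbapp's
    # Open_vSwitch schema API against the local ovsdb-server socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same parameters as the logged txn: port=tap20b2d898-f3,
    # bridge=br-int, if_exists=True.
    api.del_port("tap20b2d898-f3", bridge="br-int",
                 if_exists=True).execute(check_error=True)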
Nov 26 02:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c-userdata-shm.mount: Deactivated successfully.
Nov 26 02:15:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-33b2f0458d6526fdc8cf0d39ec37dc07c9743fa4b093112d93a107fd2fb68671-merged.mount: Deactivated successfully.
Nov 26 02:15:26 compute-0 podman[452403]: 2025-11-26 02:15:26.178641448 +0000 UTC m=+0.179350607 container cleanup 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 02:15:26 compute-0 systemd[1]: libpod-conmon-6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c.scope: Deactivated successfully.
Nov 26 02:15:26 compute-0 podman[452455]: 2025-11-26 02:15:26.293717992 +0000 UTC m=+0.071596097 container remove 6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.304 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[28f551f1-3886-4235-aeef-0001b66165ad]: (4, ('Wed Nov 26 02:15:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46 (6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c)\n6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c\nWed Nov 26 02:15:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46 (6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c)\n6a9dd5cc4ef498c3c7f0c7b3bf7a569e428a58b9eb7383bc878030784e42f61c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.307 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[18caad57-8ab9-4a3d-8bb5-718c564226d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.309 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd28058d3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.311 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 kernel: tapd28058d3-50: left promiscuous mode
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.318 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.321 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bd1999-4a5b-4671-810a-ef30f27b385d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.339 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.337 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[741d3848-20d6-4d5f-9062-d2a9af7fa298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.342 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[405abc51-2d00-45af-8718-a3cc48a001a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.368 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[60b5f663-a610-473d-9d3e-8568e80ab4d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688398, 'reachable_time': 16745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452471, 'error': None, 'target': 'ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dd28058d3\x2d5123\x2d44dd\x2d9839\x2d1c451b6aed46.mount: Deactivated successfully.
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.374 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d28058d3-5123-44dd-9839-1c451b6aed46 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:15:26 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:26.375 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfa6217-c810-4824-956a-9aa739d58e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.698 350391 DEBUG nova.compute.manager [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-unplugged-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.698 350391 DEBUG oslo_concurrency.lockutils [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.699 350391 DEBUG oslo_concurrency.lockutils [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.699 350391 DEBUG oslo_concurrency.lockutils [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.699 350391 DEBUG nova.compute.manager [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] No waiting events found dispatching network-vif-unplugged-20b2d898-f324-4aae-ae7e-59312c845d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.699 350391 DEBUG nova.compute.manager [req-7b381c48-62d7-4973-823f-a75a5cee5bc0 req-313a510a-13da-4333-82f8-060afe2b5dfe 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-unplugged-20b2d898-f324-4aae-ae7e-59312c845d00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.799 350391 INFO nova.virt.libvirt.driver [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Deleting instance files /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599_del#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.800 350391 INFO nova.virt.libvirt.driver [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Deletion of /var/lib/nova/instances/8f12f2a2-6379-4fcb-b93e-eac05f10f599_del complete#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.866 350391 INFO nova.compute.manager [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.867 350391 DEBUG oslo.service.loopingcall [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.868 350391 DEBUG nova.compute.manager [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:15:26 compute-0 nova_compute[350387]: 2025-11-26 02:15:26.868 350391 DEBUG nova.network.neutron [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:15:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:15:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2534799425' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:15:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:15:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2534799425' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:15:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 337 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 85 op/s
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.051 350391 DEBUG nova.network.neutron [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.079 350391 INFO nova.compute.manager [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.123 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.124 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.158 350391 DEBUG nova.compute.manager [req-4895db50-7154-4a04-ad34-b41be5224834 req-d9dde172-7bd1-4d3a-be9e-8929eec7c031 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-deleted-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.405 350391 DEBUG oslo_concurrency.processutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.580 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.798 350391 DEBUG nova.compute.manager [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.798 350391 DEBUG oslo_concurrency.lockutils [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.799 350391 DEBUG oslo_concurrency.lockutils [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.799 350391 DEBUG oslo_concurrency.lockutils [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.800 350391 DEBUG nova.compute.manager [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] No waiting events found dispatching network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.800 350391 WARNING nova.compute.manager [req-d5496e19-f091-4d33-81cd-5216dd1cab3c req-cb5934fd-351e-4d63-a5ca-fb12bc9f0cd3 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Received unexpected event network-vif-plugged-20b2d898-f324-4aae-ae7e-59312c845d00 for instance with vm_state deleted and task_state None.#033[00m
Nov 26 02:15:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:15:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144911004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.923 350391 DEBUG oslo_concurrency.processutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
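
Nova gathers Ceph usage by shelling out through oslo.concurrency, as the Running cmd / CMD returned pair shows. A minimal sketch of the same call; processutils.execute returns a (stdout, stderr) tuple and raises ProcessExecutionError on a non-zero exit:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
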
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.937 350391 DEBUG nova.compute.provider_tree [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.957 350391 DEBUG nova.scheduler.client.report [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
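
Placement treats usable capacity as roughly (total - reserved) * allocation_ratio, so the unchanged inventory above advertises about 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. Worked out with the logged values:

    # Worked example using the inventory data from the log line above.
    inv = {'VCPU': (8, 0, 4.0), 'MEMORY_MB': (7679, 512, 1.0), 'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
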
Nov 26 02:15:28 compute-0 nova_compute[350387]: 2025-11-26 02:15:28.986 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
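
The Acquiring/acquired/released triplets around "compute_resources" come from oslo.concurrency's lockutils; the waited/held durations are logged on acquire and release, which is how the 0.862s hold above is measured. A minimal sketch of the pattern (lock name taken from the log, bodies illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        ...  # runs with the "compute_resources" semaphore held

    # equivalently, as a context manager:
    with lockutils.lock('compute_resources'):
        ...
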
Nov 26 02:15:29 compute-0 nova_compute[350387]: 2025-11-26 02:15:29.025 350391 INFO nova.scheduler.client.report [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Deleted allocations for instance 8f12f2a2-6379-4fcb-b93e-eac05f10f599#033[00m
Nov 26 02:15:29 compute-0 nova_compute[350387]: 2025-11-26 02:15:29.124 350391 DEBUG oslo_concurrency.lockutils [None req-27416657-e650-4815-b3bb-545c8249c92d 236e06cd46874605a18288ba033ee875 8fc101eeda814bb98f1a44c789c8958f - - default default] Lock "8f12f2a2-6379-4fcb-b93e-eac05f10f599" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 337 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.8 KiB/s wr, 73 op/s
Nov 26 02:15:29 compute-0 podman[158021]: time="2025-11-26T02:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:15:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Nov 26 02:15:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9114 "" "Go-http-client/1.1"
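
The podman[158021] lines are its REST API access log; the Go-http-client entries are the prometheus-podman-exporter scraping container state over the podman socket. A stdlib-only sketch of the same query (the socket path matches the exporter's CONTAINER_HOST setting shown further down):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers))
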
Nov 26 02:15:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:31 compute-0 nova_compute[350387]: 2025-11-26 02:15:31.131 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:31 compute-0 openstack_network_exporter[367323]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:15:31 compute-0 openstack_network_exporter[367323]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:15:31 compute-0 openstack_network_exporter[367323]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:15:31 compute-0 openstack_network_exporter[367323]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:15:31 compute-0 openstack_network_exporter[367323]: ERROR   02:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:15:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 284 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 93 op/s
Nov 26 02:15:31 compute-0 podman[452495]: 2025-11-26 02:15:31.617038732 +0000 UTC m=+0.089465198 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:15:31 compute-0 podman[452497]: 2025-11-26 02:15:31.618108592 +0000 UTC m=+0.086634189 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:15:31 compute-0 podman[452496]: 2025-11-26 02:15:31.646220149 +0000 UTC m=+0.120085045 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.582 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 284 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 991 KiB/s rd, 6.9 KiB/s wr, 62 op/s
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.932 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.933 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.933 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.933 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.934 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.935 350391 INFO nova.compute.manager [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Terminating instance#033[00m
Nov 26 02:15:33 compute-0 nova_compute[350387]: 2025-11-26 02:15:33.937 350391 DEBUG nova.compute.manager [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:15:34 compute-0 kernel: tapd4404ee6-72 (unregistering): left promiscuous mode
Nov 26 02:15:34 compute-0 NetworkManager[48886]: <info>  [1764123334.0391] device (tapd4404ee6-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.053 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 ovn_controller[89102]: 2025-11-26T02:15:34Z|00163|binding|INFO|Releasing lport d4404ee6-7244-483c-99ba-127555e6ee3b from this chassis (sb_readonly=0)
Nov 26 02:15:34 compute-0 ovn_controller[89102]: 2025-11-26T02:15:34Z|00164|binding|INFO|Setting lport d4404ee6-7244-483c-99ba-127555e6ee3b down in Southbound
Nov 26 02:15:34 compute-0 ovn_controller[89102]: 2025-11-26T02:15:34Z|00165|binding|INFO|Removing iface tapd4404ee6-72 ovn-installed in OVS
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.069 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:03:6c 10.100.0.11'], port_security=['fa:16:3e:68:03:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2c25548-a42e-4a7d-850c-bdecd264a753', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0ff318c290040838d6133cda861268a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '392666de-076f-4a6b-abfe-d6c4dadf08c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4500b2b3-5d5b-4a74-8ac2-4092583234ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=d4404ee6-7244-483c-99ba-127555e6ee3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.070 286844 INFO neutron.agent.ovn.metadata.agent [-] Port d4404ee6-7244-483c-99ba-127555e6ee3b in datapath e2c25548-a42e-4a7d-850c-bdecd264a753 unbound from our chassis#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.072 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2c25548-a42e-4a7d-850c-bdecd264a753, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
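
The Matched UPDATE line above comes from ovsdbapp's IDL event framework: the metadata agent watches the Southbound Port_Binding table and reacts when a row's chassis changes, which is how it noticed the port becoming unbound. A minimal sketch of such an event class (the handler body is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Match updates to any row of the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries only the columns that changed.
            if getattr(old, 'chassis', None) != row.chassis:
                print('lport %s rebound or unbound' % row.logical_port)
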
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.072 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.073 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[bf702062-d121-449a-a8e1-37964d23f38c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.073 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 namespace which is not needed anymore#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.093 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 26 02:15:34 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 47.682s CPU time.
Nov 26 02:15:34 compute-0 systemd-machined[138512]: Machine qemu-15-instance-0000000d terminated.
Nov 26 02:15:34 compute-0 NetworkManager[48886]: <info>  [1764123334.1600] manager: (tapd4404ee6-72): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 26 02:15:34 compute-0 systemd-udevd[452554]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.184 350391 INFO nova.virt.libvirt.driver [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Instance destroyed successfully.#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.185 350391 DEBUG nova.objects.instance [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lazy-loading 'resources' on Instance uuid bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.200 350391 DEBUG nova.virt.libvirt.vif [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-824419160',display_name='tempest-ServerActionsTestJSON-server-824419160',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-824419160',id=13,image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGOUC98EN8hXycvhDt+xkn1avlrGbOp5ZypZ/FC9FWbfZj4H71JpSUmspsuEJl9YVQFHAmKxvB9zaiq05i2wC+MbwLZ87985MOXdrZIPoo0BLwHbkHW4LlqojeJFtrF82A==',key_name='tempest-keypair-396503000',keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:13:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0ff318c290040838d6133cda861268a',ramdisk_id='',reservation_id='r-pb5w045d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4728a8a0-1107-4816-98c6-74482d53f92c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1777809074',owner_user_name='tempest-ServerActionsTestJSON-1777809074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:14:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b8a1343dbab4fa693b622013d763897',uuid=bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.201 350391 DEBUG nova.network.os_vif_util [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converting VIF {"id": "d4404ee6-7244-483c-99ba-127555e6ee3b", "address": "fa:16:3e:68:03:6c", "network": {"id": "e2c25548-a42e-4a7d-850c-bdecd264a753", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-456600665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0ff318c290040838d6133cda861268a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4404ee6-72", "ovs_interfaceid": "d4404ee6-7244-483c-99ba-127555e6ee3b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.201 350391 DEBUG nova.network.os_vif_util [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.202 350391 DEBUG os_vif [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.203 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.203 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4404ee6-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.205 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.207 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.208 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.209 350391 INFO os_vif [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:03:6c,bridge_name='br-int',has_traffic_filtering=True,id=d4404ee6-7244-483c-99ba-127555e6ee3b,network=Network(e2c25548-a42e-4a7d-850c-bdecd264a753),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4404ee6-72')#033[00m
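
The unplug above is driven through the os-vif library: Nova converts its VIF dict to a VIFOpenVSwitch object and hands it to os_vif.unplug(), whose ovs plugin issues the DelPortCommand seen a few lines earlier. A minimal sketch of that call sequence (IDs and names taken from this log; assumes the ovs os-vif plugin is installed and the caller has the needed privileges):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='e2c25548-a42e-4a7d-850c-bdecd264a753',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='d4404ee6-7244-483c-99ba-127555e6ee3b',
        address='fa:16:3e:68:03:6c',
        vif_name='tapd4404ee6-72',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9',
        name='instance-0000000d')
    os_vif.unplug(ovs_vif, info)  # removes the port from br-int
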
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.266 350391 DEBUG nova.compute.manager [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.266 350391 DEBUG oslo_concurrency.lockutils [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.266 350391 DEBUG oslo_concurrency.lockutils [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.267 350391 DEBUG oslo_concurrency.lockutils [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.267 350391 DEBUG nova.compute.manager [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.267 350391 DEBUG nova.compute.manager [req-89cfd59b-cf5f-4685-afaa-a7f402946b9b req-feeaf475-562d-4daf-a273-5ef64dc82e88 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-unplugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 02:15:34 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [NOTICE]   (450678) : haproxy version is 2.8.14-c23fe91
Nov 26 02:15:34 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [NOTICE]   (450678) : path to executable is /usr/sbin/haproxy
Nov 26 02:15:34 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [WARNING]  (450678) : Exiting Master process...
Nov 26 02:15:34 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [ALERT]    (450678) : Current worker (450680) exited with code 143 (Terminated)
Nov 26 02:15:34 compute-0 neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753[450674]: [WARNING]  (450678) : All workers exited. Exiting... (0)
Nov 26 02:15:34 compute-0 systemd[1]: libpod-950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821.scope: Deactivated successfully.
Nov 26 02:15:34 compute-0 podman[452577]: 2025-11-26 02:15:34.293174402 +0000 UTC m=+0.083008587 container died 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821-userdata-shm.mount: Deactivated successfully.
Nov 26 02:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-019674fa5dd37028790f70e232229272105cff713498eaf250e83bb4b0ae1d27-merged.mount: Deactivated successfully.
Nov 26 02:15:34 compute-0 podman[452577]: 2025-11-26 02:15:34.38376987 +0000 UTC m=+0.173604095 container cleanup 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 02:15:34 compute-0 systemd[1]: libpod-conmon-950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821.scope: Deactivated successfully.
Nov 26 02:15:34 compute-0 podman[452625]: 2025-11-26 02:15:34.505947194 +0000 UTC m=+0.083330816 container remove 950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.529 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[95e53770-a993-42ce-a469-5b2b3eda1a8c]: (4, ('Wed Nov 26 02:15:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 (950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821)\n950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821\nWed Nov 26 02:15:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 (950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821)\n950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
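
The wrapper output in the privsep reply above ("Stopping container ... Deleting container ...") corresponds to a podman stop / podman rm pair run on the agent's behalf. Roughly equivalent, using the container ID from this log:

    import subprocess

    cid = '950a0cf27b043da796f1ea64f8289242599bc5a6a931bed83eeea782eb563821'
    subprocess.run(['podman', 'stop', cid], check=True)
    subprocess.run(['podman', 'rm', cid], check=True)
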
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.532 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[74840df9-7372-4116-9daa-bdec3bfe7b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.533 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2c25548-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:15:34 compute-0 kernel: tape2c25548-a0: left promiscuous mode
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.535 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 nova_compute[350387]: 2025-11-26 02:15:34.552 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.560 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[083bac73-9a0e-40ae-a1ae-0fb77a2bda62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.580 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6c301777-53c4-446f-b1d1-46d02bcb3daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.582 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f32e5115-ae25-45ff-832f-3e4eb889714c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.618 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[10dd7d57-d6ab-4508-8e91-ed44730b2d45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689571, 'reachable_time': 40687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452638, 'error': None, 'target': 'ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
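
The long privsep reply above is a netlink RTM_NEWLINK dump taken inside the metadata namespace (note target 'ovnmeta-...' in the message header); only the loopback device is left, so the agent concludes the namespace can go. A rough pyroute2 equivalent of that dump:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
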
Nov 26 02:15:34 compute-0 systemd[1]: run-netns-ovnmeta\x2de2c25548\x2da42e\x2d4a7d\x2d850c\x2dbdecd264a753.mount: Deactivated successfully.
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.622 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:15:34 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:15:34.622 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[94c3fec9-fd86-4cfe-a74f-7f8b9fd3a285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
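
The namespace removal itself is a one-liner at the pyroute2 layer, which is what neutron's privileged remove_netns helper wraps (sketch; namespace name taken from the log, root privileges assumed):

    from pyroute2 import netns

    netns.remove('ovnmeta-e2c25548-a42e-4a7d-850c-bdecd264a753')
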
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.127 350391 INFO nova.virt.libvirt.driver [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Deleting instance files /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_del#033[00m
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.128 350391 INFO nova.virt.libvirt.driver [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Deletion of /var/lib/nova/instances/bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9_del complete#033[00m
Nov 26 02:15:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.205 350391 INFO nova.compute.manager [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.205 350391 DEBUG oslo.service.loopingcall [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.206 350391 DEBUG nova.compute.manager [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.207 350391 DEBUG nova.network.neutron [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:15:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 266 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 48 op/s
Nov 26 02:15:35 compute-0 ovn_controller[89102]: 2025-11-26T02:15:35Z|00166|binding|INFO|Releasing lport b6066942-f0e5-4ff0-92ae-a027fdd86fa7 from this chassis (sb_readonly=0)
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.689 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:35 compute-0 ovn_controller[89102]: 2025-11-26T02:15:35Z|00167|binding|INFO|Releasing lport b6066942-f0e5-4ff0-92ae-a027fdd86fa7 from this chassis (sb_readonly=0)
Nov 26 02:15:35 compute-0 nova_compute[350387]: 2025-11-26 02:15:35.984 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.278 350391 DEBUG nova.network.neutron [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.293 350391 INFO nova.compute.manager [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.331 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.332 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.332 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.332 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
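
update_available_resource is driven by oslo.service's periodic task machinery, as the run_periodic_tasks line above shows. A minimal sketch of how such a task is declared (the spacing value is an assumption; PeriodicTasks takes a ConfigOpts instance when instantiated):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def update_available_resource(self, context):
            ...  # audit local compute resources, as logged above
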
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.332 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.366 350391 DEBUG nova.compute.manager [req-3ef235bc-416c-4d3b-b511-09b96ae1e136 req-dd5998d6-5872-445f-9532-1216b825b0f5 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-deleted-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.367 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.368 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.380 350391 DEBUG nova.compute.manager [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.380 350391 DEBUG oslo_concurrency.lockutils [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.381 350391 DEBUG oslo_concurrency.lockutils [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.381 350391 DEBUG oslo_concurrency.lockutils [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.381 350391 DEBUG nova.compute.manager [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] No waiting events found dispatching network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.381 350391 WARNING nova.compute.manager [req-d10a0dc3-d6aa-4c2d-9f60-0861cb2a6fab req-063d057e-0261-4722-afb3-c07ddd3db41e 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Received unexpected event network-vif-plugged-d4404ee6-7244-483c-99ba-127555e6ee3b for instance with vm_state deleted and task_state None.#033[00m
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.458 350391 DEBUG oslo_concurrency.processutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:15:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:15:36 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563946859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.845 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.965 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.965 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.979 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:15:36 compute-0 nova_compute[350387]: 2025-11-26 02:15:36.980 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:15:37 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:15:37 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311406175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.034 350391 DEBUG oslo_concurrency.processutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.049 350391 DEBUG nova.compute.provider_tree [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.068 350391 DEBUG nova.scheduler.client.report [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.092 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.133 350391 INFO nova.scheduler.client.report [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Deleted allocations for instance bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.212 350391 DEBUG oslo_concurrency.lockutils [None req-c33816b7-a73f-4bef-b230-8b9fc7eee76e 3b8a1343dbab4fa693b622013d763897 e0ff318c290040838d6133cda861268a - - default default] Lock "bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.509 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.510 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.88849639892578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.510 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.511 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.595 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.595 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.595 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.596 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:15:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 10 KiB/s wr, 58 op/s
Nov 26 02:15:37 compute-0 nova_compute[350387]: 2025-11-26 02:15:37.672 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:15:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:15:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3718795476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.119 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.133 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.152 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.187 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.187 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:15:38 compute-0 nova_compute[350387]: 2025-11-26 02:15:38.585 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:38 compute-0 podman[452708]: 2025-11-26 02:15:38.617332038 +0000 UTC m=+0.168517483 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:15:38 compute-0 podman[452709]: 2025-11-26 02:15:38.655732124 +0000 UTC m=+0.205396276 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:15:39 compute-0 nova_compute[350387]: 2025-11-26 02:15:39.206 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.7 KiB/s wr, 48 op/s
Nov 26 02:15:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:40 compute-0 nova_compute[350387]: 2025-11-26 02:15:40.189 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:40 compute-0 nova_compute[350387]: 2025-11-26 02:15:40.190 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:40 compute-0 nova_compute[350387]: 2025-11-26 02:15:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:40 compute-0 nova_compute[350387]: 2025-11-26 02:15:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:41 compute-0 nova_compute[350387]: 2025-11-26 02:15:41.085 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123326.0834324, 8f12f2a2-6379-4fcb-b93e-eac05f10f599 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:15:41 compute-0 nova_compute[350387]: 2025-11-26 02:15:41.086 350391 INFO nova.compute.manager [-] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] VM Stopped (Lifecycle Event)
Nov 26 02:15:41 compute-0 nova_compute[350387]: 2025-11-26 02:15:41.115 350391 DEBUG nova.compute.manager [None req-a6588fb9-950c-4bd0-ad9a-7a810269abf5 - - - - - -] [instance: 8f12f2a2-6379-4fcb-b93e-eac05f10f599] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:15:41
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.7 KiB/s wr, 48 op/s
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:15:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.302 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:15:42 compute-0 podman[452752]: 2025-11-26 02:15:42.588422771 +0000 UTC m=+0.127974727 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, config_id=edpm, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9)
Nov 26 02:15:42 compute-0 podman[452753]: 2025-11-26 02:15:42.613168004 +0000 UTC m=+0.146483535 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.928 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.929 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.929 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:15:42 compute-0 nova_compute[350387]: 2025-11-26 02:15:42.930 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:15:43 compute-0 nova_compute[350387]: 2025-11-26 02:15:43.588 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 26 02:15:44 compute-0 nova_compute[350387]: 2025-11-26 02:15:44.209 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:44 compute-0 nova_compute[350387]: 2025-11-26 02:15:44.533 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:15:44 compute-0 nova_compute[350387]: 2025-11-26 02:15:44.553 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:15:44 compute-0 nova_compute[350387]: 2025-11-26 02:15:44.554 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:15:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 26 02:15:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 26 02:15:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 26 02:15:46 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 26 02:15:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 818 B/s wr, 3 op/s
Nov 26 02:15:48 compute-0 podman[452791]: 2025-11-26 02:15:48.574110059 +0000 UTC m=+0.124164540 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:15:48 compute-0 podman[452790]: 2025-11-26 02:15:48.577665839 +0000 UTC m=+0.129303394 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 26 02:15:48 compute-0 nova_compute[350387]: 2025-11-26 02:15:48.591 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 26 02:15:48 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 26 02:15:48 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.182 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123334.18073, bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.183 350391 INFO nova.compute.manager [-] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] VM Stopped (Lifecycle Event)
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.205 350391 DEBUG nova.compute.manager [None req-06b9e551-7aa1-4b1b-ad60-c8015bd88fbd - - - - - -] [instance: bca63c40-45cc-4d2b-9ef8-4ffd9a0da4b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.216 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:49 compute-0 nova_compute[350387]: 2025-11-26 02:15:49.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 26 02:15:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:51 compute-0 nova_compute[350387]: 2025-11-26 02:15:51.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011064783160773589 of space, bias 1.0, pg target 0.3319434948232077 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0012521646276942111 of space, bias 1.0, pg target 0.37564938830826333 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:15:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.1 KiB/s wr, 53 op/s
Nov 26 02:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:15:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 9888 writes, 37K keys, 9888 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9888 writes, 2676 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2339 writes, 8486 keys, 2339 commit groups, 1.0 writes per commit group, ingest: 8.13 MB, 0.01 MB/s
Interval WAL: 2339 writes, 986 syncs, 2.37 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:15:53 compute-0 nova_compute[350387]: 2025-11-26 02:15:53.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:15:53 compute-0 nova_compute[350387]: 2025-11-26 02:15:53.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:15:53 compute-0 nova_compute[350387]: 2025-11-26 02:15:53.594 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 62 op/s
Nov 26 02:15:53 compute-0 ovn_controller[89102]: 2025-11-26T02:15:53Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:b7:00 10.100.2.215
Nov 26 02:15:53 compute-0 ovn_controller[89102]: 2025-11-26T02:15:53Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:b7:00 10.100.2.215
Nov 26 02:15:54 compute-0 nova_compute[350387]: 2025-11-26 02:15:54.223 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 02:15:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 26 02:15:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 26 02:15:55 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 26 02:15:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 216 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 1.5 MiB/s wr, 79 op/s
Nov 26 02:15:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 234 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 2.9 MiB/s wr, 133 op/s
Nov 26 02:15:58 compute-0 nova_compute[350387]: 2025-11-26 02:15:58.597 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:15:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 11K writes, 3056 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2571 writes, 9774 keys, 2571 commit groups, 1.0 writes per commit group, ingest: 11.14 MB, 0.02 MB/s
Interval WAL: 2571 writes, 1020 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:15:59 compute-0 nova_compute[350387]: 2025-11-26 02:15:59.227 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:15:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 2.6 MiB/s wr, 116 op/s
Nov 26 02:15:59 compute-0 podman[158021]: time="2025-11-26T02:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:15:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:15:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Nov 26 02:16:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:01 compute-0 openstack_network_exporter[367323]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:16:01 compute-0 openstack_network_exporter[367323]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:16:01 compute-0 openstack_network_exporter[367323]: ERROR   02:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:16:01 compute-0 openstack_network_exporter[367323]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:16:01 compute-0 openstack_network_exporter[367323]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:16:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Nov 26 02:16:01 compute-0 podman[452931]: 2025-11-26 02:16:01.837344983 +0000 UTC m=+0.103985814 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 02:16:01 compute-0 podman[452932]: 2025-11-26 02:16:01.851581732 +0000 UTC m=+0.112317238 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 02:16:01 compute-0 podman[452933]: 2025-11-26 02:16:01.861974463 +0000 UTC m=+0.103302865 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:02 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 94f1d21b-2f6b-4660-81b3-20aa29118ced does not exist
Nov 26 02:16:02 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5f828757-b1f6-412c-8d50-e122de388925 does not exist
Nov 26 02:16:02 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c38c09fc-5cbd-4972-828a-ab1e9a3debac does not exist
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:16:02 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:16:02 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:16:03 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:16:03 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:03 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.548498586 +0000 UTC m=+0.076076512 container create 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:16:03 compute-0 nova_compute[350387]: 2025-11-26 02:16:03.602 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.521310164 +0000 UTC m=+0.048888100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:03 compute-0 systemd[1]: Started libpod-conmon-1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8.scope.
Nov 26 02:16:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.6 MiB/s wr, 70 op/s
Nov 26 02:16:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.718995194 +0000 UTC m=+0.246573130 container init 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.737193414 +0000 UTC m=+0.264771360 container start 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:16:03 compute-0 xenodochial_antonelli[453176]: 167 167
Nov 26 02:16:03 compute-0 systemd[1]: libpod-1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8.scope: Deactivated successfully.
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.745088385 +0000 UTC m=+0.272666331 container attach 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.747996906 +0000 UTC m=+0.275574872 container died 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dad1c194efabb6621e13d7454dbeac66d438c72217bc1095432f2efecb02638-merged.mount: Deactivated successfully.
Nov 26 02:16:03 compute-0 podman[453160]: 2025-11-26 02:16:03.81772092 +0000 UTC m=+0.345298866 container remove 1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:16:03 compute-0 systemd[1]: libpod-conmon-1abf96a07dd4e560a349d36c930f54cbdeda7b5eb42313a2f53f5cb79c8265a8.scope: Deactivated successfully.
Nov 26 02:16:04 compute-0 podman[453198]: 2025-11-26 02:16:04.125081931 +0000 UTC m=+0.123602444 container create 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:16:04 compute-0 podman[453198]: 2025-11-26 02:16:04.061108039 +0000 UTC m=+0.059628602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:04 compute-0 systemd[1]: Started libpod-conmon-6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7.scope.
Nov 26 02:16:04 compute-0 nova_compute[350387]: 2025-11-26 02:16:04.230 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:04 compute-0 podman[453198]: 2025-11-26 02:16:04.318643855 +0000 UTC m=+0.317164418 container init 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:16:04 compute-0 podman[453198]: 2025-11-26 02:16:04.346710691 +0000 UTC m=+0.345231234 container start 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 02:16:04 compute-0 podman[453198]: 2025-11-26 02:16:04.353744328 +0000 UTC m=+0.352264851 container attach 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:16:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:05 compute-0 loving_nightingale[453214]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:16:05 compute-0 loving_nightingale[453214]: --> relative data size: 1.0
Nov 26 02:16:05 compute-0 loving_nightingale[453214]: --> All data devices are unavailable
Nov 26 02:16:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 26 02:16:05 compute-0 systemd[1]: libpod-6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7.scope: Deactivated successfully.
Nov 26 02:16:05 compute-0 podman[453198]: 2025-11-26 02:16:05.680483512 +0000 UTC m=+1.679004025 container died 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:16:05 compute-0 systemd[1]: libpod-6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7.scope: Consumed 1.238s CPU time.
Nov 26 02:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:16:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 9454 writes, 36K keys, 9454 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9454 writes, 2477 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1928 writes, 7544 keys, 1928 commit groups, 1.0 writes per commit group, ingest: 8.85 MB, 0.01 MB/s#012Interval WAL: 1928 writes, 778 syncs, 2.48 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3da29f51231b360d1ff46a67e5da75e833095115d45d06c815084217cbb60b82-merged.mount: Deactivated successfully.
Nov 26 02:16:05 compute-0 podman[453198]: 2025-11-26 02:16:05.949285612 +0000 UTC m=+1.947806135 container remove 6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:16:05 compute-0 systemd[1]: libpod-conmon-6bc9ea006d4ad1a9205e77a9e5805a87fd2c10e9066d2bb7b2b121194b408bc7.scope: Deactivated successfully.
Nov 26 02:16:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 02:16:06 compute-0 podman[453396]: 2025-11-26 02:16:06.979457056 +0000 UTC m=+0.078405608 container create 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:06.946604565 +0000 UTC m=+0.045553157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:07 compute-0 systemd[1]: Started libpod-conmon-30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d.scope.
Nov 26 02:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:07.12273481 +0000 UTC m=+0.221683392 container init 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:07.136259119 +0000 UTC m=+0.235207641 container start 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:07.141536487 +0000 UTC m=+0.240485029 container attach 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:16:07 compute-0 laughing_raman[453412]: 167 167
Nov 26 02:16:07 compute-0 systemd[1]: libpod-30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d.scope: Deactivated successfully.
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:07.150987652 +0000 UTC m=+0.249936194 container died 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bef37ae4c9565d9eb47ea4415b900a459669f1887b7f684438824debb56cd52-merged.mount: Deactivated successfully.
Nov 26 02:16:07 compute-0 podman[453396]: 2025-11-26 02:16:07.221093526 +0000 UTC m=+0.320042048 container remove 30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:16:07 compute-0 systemd[1]: libpod-conmon-30d97c207bdd2e135caf66d8456ba2f2770d5de128390a83ce91944fa2583e1d.scope: Deactivated successfully.
Nov 26 02:16:07 compute-0 podman[453435]: 2025-11-26 02:16:07.450300178 +0000 UTC m=+0.059257731 container create e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 02:16:07 compute-0 podman[453435]: 2025-11-26 02:16:07.426193123 +0000 UTC m=+0.035150716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:07 compute-0 systemd[1]: Started libpod-conmon-e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141.scope.
Nov 26 02:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae50ddd6315b81d954115a28dec9fbc37df580d4dec0bb7cc81d57eaf984f36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae50ddd6315b81d954115a28dec9fbc37df580d4dec0bb7cc81d57eaf984f36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae50ddd6315b81d954115a28dec9fbc37df580d4dec0bb7cc81d57eaf984f36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae50ddd6315b81d954115a28dec9fbc37df580d4dec0bb7cc81d57eaf984f36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:07 compute-0 podman[453435]: 2025-11-26 02:16:07.625641441 +0000 UTC m=+0.234599024 container init e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 02:16:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 181 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Nov 26 02:16:07 compute-0 podman[453435]: 2025-11-26 02:16:07.647491803 +0000 UTC m=+0.256449396 container start e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:16:07 compute-0 podman[453435]: 2025-11-26 02:16:07.656129855 +0000 UTC m=+0.265087488 container attach e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 02:16:08 compute-0 happy_meninsky[453452]: {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    "0": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "devices": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "/dev/loop3"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            ],
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_name": "ceph_lv0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_size": "21470642176",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "name": "ceph_lv0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "tags": {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_name": "ceph",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.crush_device_class": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.encrypted": "0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_id": "0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.vdo": "0"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            },
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "vg_name": "ceph_vg0"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        }
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    ],
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    "1": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "devices": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "/dev/loop4"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            ],
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_name": "ceph_lv1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_size": "21470642176",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "name": "ceph_lv1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "tags": {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_name": "ceph",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.crush_device_class": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.encrypted": "0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_id": "1",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.vdo": "0"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            },
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "vg_name": "ceph_vg1"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        }
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    ],
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    "2": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "devices": [
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "/dev/loop5"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            ],
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_name": "ceph_lv2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_size": "21470642176",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "name": "ceph_lv2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "tags": {
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.cluster_name": "ceph",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.crush_device_class": "",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.encrypted": "0",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osd_id": "2",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:                "ceph.vdo": "0"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            },
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "type": "block",
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:            "vg_name": "ceph_vg2"
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:        }
Nov 26 02:16:08 compute-0 happy_meninsky[453452]:    ]
Nov 26 02:16:08 compute-0 happy_meninsky[453452]: }
Nov 26 02:16:08 compute-0 systemd[1]: libpod-e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141.scope: Deactivated successfully.
Nov 26 02:16:08 compute-0 podman[453435]: 2025-11-26 02:16:08.551444091 +0000 UTC m=+1.160401714 container died e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 02:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae50ddd6315b81d954115a28dec9fbc37df580d4dec0bb7cc81d57eaf984f36-merged.mount: Deactivated successfully.
Nov 26 02:16:08 compute-0 nova_compute[350387]: 2025-11-26 02:16:08.607 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:08 compute-0 podman[453435]: 2025-11-26 02:16:08.658923932 +0000 UTC m=+1.267881495 container remove e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:16:08 compute-0 systemd[1]: libpod-conmon-e62eb7eb4692d9b0c7a1aeaffe1d6a2975195df6b197c67261ad763d94705141.scope: Deactivated successfully.
Nov 26 02:16:08 compute-0 podman[453471]: 2025-11-26 02:16:08.870702126 +0000 UTC m=+0.132807052 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 02:16:08 compute-0 podman[453473]: 2025-11-26 02:16:08.911380446 +0000 UTC m=+0.172740531 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 02:16:09 compute-0 nova_compute[350387]: 2025-11-26 02:16:09.235 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.794506138 +0000 UTC m=+0.082151962 container create 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.760482045 +0000 UTC m=+0.048127939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:09 compute-0 systemd[1]: Started libpod-conmon-9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9.scope.
Nov 26 02:16:09 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.929981584 +0000 UTC m=+0.217627438 container init 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.946202409 +0000 UTC m=+0.233848263 container start 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:16:09 compute-0 elated_gates[453668]: 167 167
Nov 26 02:16:09 compute-0 systemd[1]: libpod-9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9.scope: Deactivated successfully.
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.953813122 +0000 UTC m=+0.241458946 container attach 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:16:09 compute-0 podman[453652]: 2025-11-26 02:16:09.963908635 +0000 UTC m=+0.251554459 container died 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f1ff0089610d1efee41224c639b969ae80a8d38b217846cb0682ff717fa612-merged.mount: Deactivated successfully.
Nov 26 02:16:10 compute-0 podman[453652]: 2025-11-26 02:16:10.078698871 +0000 UTC m=+0.366344685 container remove 9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gates, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:16:10 compute-0 systemd[1]: libpod-conmon-9dfbf4926a1dee15ac067a2744069addb5ad56f180886403c2f01ce4978e55d9.scope: Deactivated successfully.
Nov 26 02:16:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:10 compute-0 podman[453692]: 2025-11-26 02:16:10.401138866 +0000 UTC m=+0.093208383 container create 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:16:10 compute-0 podman[453692]: 2025-11-26 02:16:10.365497037 +0000 UTC m=+0.057566604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:16:10 compute-0 systemd[1]: Started libpod-conmon-952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2.scope.
Nov 26 02:16:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a40869f7a74f829bbef05057e5560c1a97342ab4db5941b14949e444fe1cca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a40869f7a74f829bbef05057e5560c1a97342ab4db5941b14949e444fe1cca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a40869f7a74f829bbef05057e5560c1a97342ab4db5941b14949e444fe1cca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a40869f7a74f829bbef05057e5560c1a97342ab4db5941b14949e444fe1cca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:16:10 compute-0 podman[453692]: 2025-11-26 02:16:10.559307957 +0000 UTC m=+0.251377534 container init 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 02:16:10 compute-0 podman[453692]: 2025-11-26 02:16:10.60043839 +0000 UTC m=+0.292507907 container start 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:16:10 compute-0 podman[453692]: 2025-11-26 02:16:10.607056425 +0000 UTC m=+0.299125942 container attach 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]: {
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_id": 0,
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "type": "bluestore"
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    },
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_id": 2,
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "type": "bluestore"
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    },
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_id": 1,
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:        "type": "bluestore"
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]:    }
Nov 26 02:16:11 compute-0 hopeful_mclaren[453707]: }
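The JSON this one-shot container printed resembles `ceph-volume lvm list --format json` output: a map of OSD UUIDs to their backing logical volume, cluster fsid, and store type (three BlueStore OSDs on ceph_vg0/1/2 here). A minimal sketch of consuming such a payload, assuming only the fields visible above:

    import json

    # One entry from the log above, abbreviated to a single OSD.
    raw = '''{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }'''

    for osd_uuid, osd in json.loads(raw).items():
        # Each key is the OSD UUID; the value describes its backing device.
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")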
Nov 26 02:16:11 compute-0 systemd[1]: libpod-952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2.scope: Deactivated successfully.
Nov 26 02:16:11 compute-0 systemd[1]: libpod-952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2.scope: Consumed 1.028s CPU time.
Nov 26 02:16:11 compute-0 podman[453692]: 2025-11-26 02:16:11.622551718 +0000 UTC m=+1.314621245 container died 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 26 02:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a40869f7a74f829bbef05057e5560c1a97342ab4db5941b14949e444fe1cca6-merged.mount: Deactivated successfully.
Nov 26 02:16:11 compute-0 podman[453692]: 2025-11-26 02:16:11.717244791 +0000 UTC m=+1.409314278 container remove 952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:16:11 compute-0 systemd[1]: libpod-conmon-952e01d0176e81cf22946208dd35c9b9e2d16a0e6a29bbea5a959bfa5d9a31b2.scope: Deactivated successfully.
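Taken together, the create/init/start/attach events followed by died/remove are the footprint of a one-shot container invocation: cephadm periodically launches throwaway containers from the ceph image to scan local devices, then tears them down. A sketch of the same pattern, assuming `podman run --rm` with the image digest from the log and a hypothetical ceph-volume listing as the payload command:

    import subprocess

    # One-shot container: podman emits create/init/start/attach, then
    # died/remove once the command exits and --rm cleans up.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    result = subprocess.run(
        ["podman", "run", "--rm", image,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=False)
    print(result.stdout)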
Nov 26 02:16:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:16:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:16:11 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev abacc885-cabf-4031-a3b3-89cec8de3916 does not exist
Nov 26 02:16:11 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 488f4e7a-e36f-4985-af30-8cb8fbf845e2 does not exist
Nov 26 02:16:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:12 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:16:13 compute-0 podman[453803]: 2025-11-26 02:16:13.558098628 +0000 UTC m=+0.113714057 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 02:16:13 compute-0 podman[453802]: 2025-11-26 02:16:13.583205271 +0000 UTC m=+0.133891962 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:16:13 compute-0 nova_compute[350387]: 2025-11-26 02:16:13.611 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 26 02:16:14 compute-0 nova_compute[350387]: 2025-11-26 02:16:14.238 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:16:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:16:18 compute-0 nova_compute[350387]: 2025-11-26 02:16:18.613 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:19 compute-0 nova_compute[350387]: 2025-11-26 02:16:19.241 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:19 compute-0 podman[453839]: 2025-11-26 02:16:19.540909547 +0000 UTC m=+0.080458816 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:16:19 compute-0 podman[453838]: 2025-11-26 02:16:19.542942304 +0000 UTC m=+0.098705747 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc.)
Nov 26 02:16:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:16:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:20.438 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:16:20 compute-0 nova_compute[350387]: 2025-11-26 02:16:20.437 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:20 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:20.441 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 02:16:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:16:23 compute-0 nova_compute[350387]: 2025-11-26 02:16:23.617 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:16:24 compute-0 nova_compute[350387]: 2025-11-26 02:16:24.244 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:25.001 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:16:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:25.002 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:16:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:25.003 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
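The Acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing; the lockutils.py:404/409/423 suffixes are the library's own log sites. A minimal sketch of the pattern that produces it, using `lockutils.lock` from oslo.concurrency:

    from oslo_concurrency import lockutils

    def _check_child_processes():
        # Entering the context logs "Acquiring"/"acquired"; leaving it
        # logs "released", exactly as in the three lines above.
        with lockutils.lock("_check_child_processes"):
            pass  # inspect monitored child processes here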
Nov 26 02:16:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 26 02:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3604794101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:16:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:16:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3604794101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:16:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Nov 26 02:16:28 compute-0 nova_compute[350387]: 2025-11-26 02:16:28.619 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:29 compute-0 nova_compute[350387]: 2025-11-26 02:16:29.249 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:29 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:16:29.444 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 02:16:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:16:29 compute-0 podman[158021]: time="2025-11-26T02:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:16:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:16:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Nov 26 02:16:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:31 compute-0 openstack_network_exporter[367323]: ERROR   02:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:16:31 compute-0 openstack_network_exporter[367323]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:16:31 compute-0 openstack_network_exporter[367323]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:16:31 compute-0 openstack_network_exporter[367323]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:16:31 compute-0 openstack_network_exporter[367323]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:16:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:16:32 compute-0 podman[453879]: 2025-11-26 02:16:32.559455694 +0000 UTC m=+0.114565571 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 02:16:32 compute-0 podman[453880]: 2025-11-26 02:16:32.575221065 +0000 UTC m=+0.128936763 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 26 02:16:32 compute-0 podman[453881]: 2025-11-26 02:16:32.582701015 +0000 UTC m=+0.130066605 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:16:33 compute-0 nova_compute[350387]: 2025-11-26 02:16:33.622 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:16:34 compute-0 nova_compute[350387]: 2025-11-26 02:16:34.253 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.339 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.340 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.340 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.341 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.341 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:16:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:16:36 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472089956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.859 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
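Here nova's resource tracker shells out to `ceph df --format=json` to size the RBD-backed storage before auditing resources. A sketch of running the same command and reading the usual stats/pools fields; the field names reflect recent Ceph JSON output and are an assumption, not nova's exact parsing:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)

    gib = 1024 ** 3
    print("cluster avail: %.1f GiB" % (df["stats"]["total_avail_bytes"] / gib))
    for pool in df["pools"]:
        # Per-pool usage lives under pool["stats"] in recent releases.
        s = pool["stats"]
        print(pool["name"], "stored:", s["stored"], "max_avail:", s["max_avail"])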
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.974 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.975 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.983 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:16:36 compute-0 nova_compute[350387]: 2025-11-26 02:16:36.984 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.404 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.406 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3511MB free_disk=59.897369384765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.406 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.407 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.535 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.536 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.537 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.538 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.563 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.583 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.584 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.617 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.640 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 02:16:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 26 02:16:37 compute-0 nova_compute[350387]: 2025-11-26 02:16:37.708 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:16:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:16:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12264296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.219 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.230 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.261 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.264 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.265 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
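The inventory dict logged above encodes placement's capacity model: per resource class, the schedulable capacity is (total - reserved) * allocation_ratio. A worked example with the exact values from this host:

    # Placement capacity arithmetic for the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {effective:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2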
Nov 26 02:16:38 compute-0 nova_compute[350387]: 2025-11-26 02:16:38.627 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:39 compute-0 nova_compute[350387]: 2025-11-26 02:16:39.257 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:39 compute-0 podman[453981]: 2025-11-26 02:16:39.572703653 +0000 UTC m=+0.115441315 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:16:39 compute-0 podman[453982]: 2025-11-26 02:16:39.611996214 +0000 UTC m=+0.153817471 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:16:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:16:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:16:41
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
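The balancer lines above record one upmap optimization pass: the module built plan auto_2025-11-26_02:16:41 over the listed pools and prepared 0/10 changes, meaning the PG mappings are already balanced. A sketch, assuming admin credentials are available on this node, of reading the same state through the ceph CLI:

    # Query balancer state as JSON; "mode" and "active" mirror the values
    # logged by the mgr balancer module above. Assumes a client.admin keyring.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status["mode"], status["active"])  # expect: upmap True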
Nov 26 02:16:41 compute-0 nova_compute[350387]: 2025-11-26 02:16:41.265 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:41 compute-0 nova_compute[350387]: 2025-11-26 02:16:41.266 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:41 compute-0 nova_compute[350387]: 2025-11-26 02:16:41.267 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
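The nova_compute entries above come from oslo.service's periodic task runner iterating over ComputeManager methods registered as periodic tasks. A minimal, self-contained sketch of that pattern (the manager and task names here are illustrative, not Nova's own):

    # Pattern behind the "Running periodic task ..." lines: methods decorated
    # with @periodic_task are collected by PeriodicTasks and dispatched by
    # run_periodic_tasks().
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10)
        def _poll_something(self, context):
            print("periodic task ran")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)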
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:16:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
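The rbd_support lines show the TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler each reloading per-pool schedules, which is why pools such as volumes, backups, and images appear twice. A sketch, assuming the rbd CLI with admin access, of listing the schedules those handlers load:

    # List trash-purge and mirror-snapshot schedules for the pools named in
    # the log; an empty result is consistent with the empty "start_after="
    # entries above.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for kind in (["trash", "purge"], ["mirror", "snapshot"]):
            cmd = ["rbd", *kind, "schedule", "ls", "--pool", pool]
            out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
            print(pool, " ".join(kind), out or "(none)")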
Nov 26 02:16:42 compute-0 nova_compute[350387]: 2025-11-26 02:16:42.301 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:42 compute-0 nova_compute[350387]: 2025-11-26 02:16:42.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.874 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.875 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
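With only one worker thread, the pollsters registered below simply queue on the executor and run sequentially. The dispatch shape, sketched with just the standard library (the pollster names are illustrative stand-ins):

    # More tasks than workers: submissions queue on the executor and complete
    # one at a time, matching the "[1] threads" message above.
    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.ephemeral.size", "network.incoming.packets", "cpu"]

    def poll(name: str) -> str:
        return f"polled {name}"

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, p) for p in pollsters]
        for future in futures:
            print(future.result())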
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.875 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.876 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.885 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.891 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance add194b7-6a6c-48ef-8355-3344185eb43e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 02:16:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:42.892 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/add194b7-6a6c-48ef-8355-3344185eb43e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4e94a0ede5bb893797130fc39ee992faf1803b43b6582353b5619a442e3adefc" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 02:16:42 compute-0 nova_compute[350387]: 2025-11-26 02:16:42.979 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:16:42 compute-0 nova_compute[350387]: 2025-11-26 02:16:42.980 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:16:42 compute-0 nova_compute[350387]: 2025-11-26 02:16:42.981 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.500 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 26 Nov 2025 02:16:42 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ed5a73e8-fcdf-49eb-bb73-665ce61cbf01 x-openstack-request-id: req-ed5a73e8-fcdf-49eb-bb73-665ce61cbf01 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.500 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "add194b7-6a6c-48ef-8355-3344185eb43e", "name": "te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i", "status": "ACTIVE", "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "user_id": "3a9710ede02d47cbb016ff596d936633", "metadata": {"metering.server_group": "bd820598-acdd-4f42-8252-1f5951161b01"}, "hostId": "0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108", "image": {"id": "dbaf181e-c7da-4938-bfef-7ab3aa9a19bc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/dbaf181e-c7da-4938-bfef-7ab3aa9a19bc"}]}, "flavor": {"id": "6db4d080-ab1e-4a78-a6d9-858137b0ba8b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/6db4d080-ab1e-4a78-a6d9-858137b0ba8b"}]}, "created": "2025-11-26T02:15:05Z", "updated": "2025-11-26T02:15:15Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.215", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6e:b7:00"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/add194b7-6a6c-48ef-8355-3344185eb43e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/add194b7-6a6c-48ef-8355-3344185eb43e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T02:15:15.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.500 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/add194b7-6a6c-48ef-8355-3344185eb43e used request id req-ed5a73e8-fcdf-49eb-bb73-665ce61cbf01 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
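The REQ/RESP pair above is novaclient fetching a single server record through a keystoneauth1 session; note the X-Auth-Token header is logged as a SHA256 digest rather than the raw token. A minimal sketch of issuing the same GET, where the Keystone URL, username, password, and domains are placeholders (only the Nova endpoint and server UUID come from the log):

    # Issue the logged GET via a keystoneauth1 session. All credentials below
    # are placeholder assumptions, not values recoverable from this log.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    resp = sess.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "add194b7-6a6c-48ef-8355-3344185eb43e",
        headers={"X-OpenStack-Nova-API-Version": "2.1"},
    )
    print(resp.status_code, resp.json()["server"]["status"])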
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.502 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'name': 'te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.503 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.503 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.503 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.503 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.504 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.505 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.505 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.505 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.506 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:16:43.503750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:16:43.506615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.513 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.520 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for add194b7-6a6c-48ef-8355-3344185eb43e / tapcaa46d5d-d6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.520 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.522 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.523 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.525 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.525 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:16:43.523329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.526 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.526 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.526 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.527 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.527 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:16:43.526798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.530 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.530 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.530 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.530 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:16:43.530499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.531 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.531 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.533 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.534 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.535 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:16:43.533652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.536 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:16:43.536792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.572 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 247070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 nova_compute[350387]: 2025-11-26 02:16:43.628 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.630 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/cpu volume: 84780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.631 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.631 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.632 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.632 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.632 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.632 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.633 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.633 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.634 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.635 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.635 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.635 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.636 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:16:43.632669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.636 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.636 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 43.5234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.637 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/memory.usage volume: 43.23046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.637 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:16:43.636209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.638 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.638 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.638 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.638 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.639 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T02:16:43.638958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.639 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.639 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i>]
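This is the only ERROR in the window, and it is expected: the libvirt inspector exposes cumulative byte counters but no precomputed rates, so the rate pollster raises PollsterPermanentError and the manager blacklists those resources for it rather than retrying every cycle. A sketch of that contract (the exception class below imitates ceilometer.polling.plugin_base for illustration; it is not imported from it):

    # Contract behind the blacklisting: a pollster that can never serve a
    # resource raises a permanent error carrying those resources, and the
    # manager drops them from that pollster's future polling cycles.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    def get_samples(resources, inspector_has_rates=False):
        if not inspector_has_rates:
            raise PollsterPermanentError(resources)
        return []

    try:
        get_samples(["te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i"])
    except PollsterPermanentError as exc:
        print("blacklisting:", exc.fail_res_list)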
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.640 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.640 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.640 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.640 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.641 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.641 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.641 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:16:43.641130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.642 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:16:43.643388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.644 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.645 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.645 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.645 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:16:43.644999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.646 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.647 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:16:43.646697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.647 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.647 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.647 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:16:43.648261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.648 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.649 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:16:43.649811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
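The thread ids in the heartbeat pairs show a division of labour: the polling thread (15) logs "Pollster heartbeat update" from heartbeat(), while a different thread (12) logs "Updated heartbeat for ... " from _update_status() shortly after, consistent with heartbeats being handed off to a separate status thread. A purely illustrative sketch of such a handoff, assuming a queue between the two; ceilometer's actual mechanism may differ:

```python
import datetime
import queue
import threading

heartbeats = queue.Queue()

def poller(meter):
    # Polling-thread side: record a heartbeat timestamp and move on.
    heartbeats.put((meter, datetime.datetime.now(datetime.timezone.utc)))

def status_updater():
    # Status-thread side: persist the timestamps, like the
    # "_update_status" lines above that carry thread id 12 rather than 15.
    while True:
        meter, ts = heartbeats.get()
        print(f"Updated heartbeat for {meter} ({ts.isoformat()})")
        heartbeats.task_done()

threading.Thread(target=status_updater, daemon=True).start()
poller("disk.device.capacity")
heartbeats.join()
```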
Nov 26 02:16:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.665 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.666 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.691 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.691 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.692 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.693 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.693 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.693 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.693 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.693 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:16:43.693801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.733 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.733 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.798 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 30366720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.799 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.800 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.800 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.800 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.801 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.801 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T02:16:43.801294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.801 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.804 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.804 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i>]
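The pair of lines above documents the permanent-error path: LibvirtInspector provides no data for IncomingBytesRatePollster, the pollster raises PollsterPermanentError, and the manager excludes the failing resources from that pollster/source pair for the rest of the agent's life. A simplified, hypothetical model of that blocklisting behaviour (the real logic lives in ceilometer/polling/manager.py; only the exception name and log wording are taken from the lines above):

```python
class PollsterPermanentError(Exception):
    """Raised when some resources can never yield data for a pollster."""
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources

# Resources blocklisted per pollster once a permanent error is seen.
blocklist = set()

def poll(pollster_name, resources, get_samples):
    usable = [r for r in resources if (pollster_name, r) not in blocklist]
    try:
        return get_samples(usable)
    except PollsterPermanentError as exc:
        for r in exc.fail_res_list:
            blocklist.add((pollster_name, r))
        print(f"Prevent pollster {pollster_name} from polling "
              f"{exc.fail_res_list} anymore!")
        return []

# An inspector that never provides data triggers the permanent error:
def get_samples(resources):
    if resources:
        raise PollsterPermanentError(resources)
    return []

poll("network.incoming.bytes.rate", ["instance-a"], get_samples)
poll("network.incoming.bytes.rate", ["instance-a"], get_samples)  # polls nothing now
```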
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.805 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2333207221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:16:43.805627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.810 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 852741029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.811 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 2700802924 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.811 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 184971572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.812 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.812 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.812 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.818 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.818 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.818 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.818 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.819 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.819 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.819 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.820 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.820 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.820 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.820 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.820 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.821 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.821 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.821 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.821 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.821 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.822 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.823 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.823 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 72806400 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.823 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.823 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.824 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:16:43.818737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:16:43.820996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:16:43.822723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:16:43.824456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 8514171650 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:16:43.826189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.827 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.827 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 7403605396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.827 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.827 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.828 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.828 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.828 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.828 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.829 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:16:43.828455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.829 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.829 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.829 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.830 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.831 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:16:43.830795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.831 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.831 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.832 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.832 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.838 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:16:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:16:43.838 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
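The burst of "Finished processing pollster [...]" lines closes the polling task; comparing it against the earlier "Polling pollster" lines is a quick way to spot meters that started but never completed. A small checker under the same format assumptions as above (names are mine):

```python
import re

STARTED_RE = re.compile(
    r"INFO ceilometer\.polling\.manager \[-\] Polling pollster (?P<meter>\S+)"
)
PROCESSED_RE = re.compile(
    r"Finished processing pollster \[(?P<meter>[^\]]+)\]"
)

def unfinished(journal_lines):
    """Meters that began polling but never logged task completion."""
    started, done = set(), set()
    for line in journal_lines:
        if (m := STARTED_RE.search(line)):
            started.add(m["meter"])
        if (m := PROCESSED_RE.search(line)):
            done.add(m["meter"])
    return started - done
```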
Nov 26 02:16:44 compute-0 nova_compute[350387]: 2025-11-26 02:16:44.261 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:44 compute-0 nova_compute[350387]: 2025-11-26 02:16:44.550 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:16:44 compute-0 nova_compute[350387]: 2025-11-26 02:16:44.577 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:16:44 compute-0 nova_compute[350387]: 2025-11-26 02:16:44.578 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:16:44 compute-0 nova_compute[350387]: 2025-11-26 02:16:44.579 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
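The _heal_instance_info_cache lines embed the full network_info as JSON, so the cached port data above (MAC fa:16:3e:6e:b7:00, fixed IP 10.100.2.215, OVN-bound OVS port on br-int) can be recovered mechanically. A sketch, assuming exactly the line layout shown; the helper name is mine:

```python
import json
import re

def network_info(line):
    """Parse the JSON list embedded in a nova 'Updating
    instance_info_cache with network_info: [...]' journal line."""
    m = re.search(r"network_info: (\[.*\]) update_instance_cache", line)
    return json.loads(m.group(1)) if m else None

# For the cache update above:
# network_info(line)[0]["network"]["subnets"][0]["ips"][0]["address"]
# -> "10.100.2.215"
```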
Nov 26 02:16:44 compute-0 podman[454025]: 2025-11-26 02:16:44.581127481 +0000 UTC m=+0.129902901 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:16:44 compute-0 podman[454026]: 2025-11-26 02:16:44.602092078 +0000 UTC m=+0.149040326 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
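Interleaved with the agent output, podman logs one "container health_status" event per healthcheck run (kepler and multipathd above, both healthy with a zero failing streak). A sketch for extracting name/status pairs; the regex assumes the key order shown in these lines and the names are mine:

```python
import re

HEALTH_RE = re.compile(
    r"podman\[\d+\]:.*container health_status "
    r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,]+)"
)

def health_events(journal_lines):
    """Yield (container_name, health_status) from podman healthcheck lines."""
    for line in journal_lines:
        if (m := HEALTH_RE.search(line)):
            yield m["name"], m["status"]
```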
Nov 26 02:16:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:16:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:16:48 compute-0 nova_compute[350387]: 2025-11-26 02:16:48.630 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:49 compute-0 nova_compute[350387]: 2025-11-26 02:16:49.264 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:49 compute-0 nova_compute[350387]: 2025-11-26 02:16:49.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:49 compute-0 nova_compute[350387]: 2025-11-26 02:16:49.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:50 compute-0 podman[454062]: 2025-11-26 02:16:50.588667112 +0000 UTC m=+0.126523246 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Nov 26 02:16:50 compute-0 podman[454063]: 2025-11-26 02:16:50.594738682 +0000 UTC m=+0.137284848 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015154937487790584 of space, bias 1.0, pg target 0.4546481246337175 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:16:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:53 compute-0 nova_compute[350387]: 2025-11-26 02:16:53.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:16:53 compute-0 nova_compute[350387]: 2025-11-26 02:16:53.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:16:53 compute-0 nova_compute[350387]: 2025-11-26 02:16:53.633 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:54 compute-0 nova_compute[350387]: 2025-11-26 02:16:54.268 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:16:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:58 compute-0 nova_compute[350387]: 2025-11-26 02:16:58.636 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:59 compute-0 ovn_controller[89102]: 2025-11-26T02:16:59Z|00168|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Nov 26 02:16:59 compute-0 nova_compute[350387]: 2025-11-26 02:16:59.271 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:16:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:16:59 compute-0 podman[158021]: time="2025-11-26T02:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:16:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:16:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
Nov 26 02:17:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:01 compute-0 openstack_network_exporter[367323]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:17:01 compute-0 openstack_network_exporter[367323]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:17:01 compute-0 openstack_network_exporter[367323]: ERROR   02:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:17:01 compute-0 openstack_network_exporter[367323]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:17:01 compute-0 openstack_network_exporter[367323]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
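[editor note] These exporter errors are expected on a compute node: openstack_network_exporter locates daemons through their unixctl control sockets (files named <daemon>.<pid>.ctl in the daemon rundir), and neither ovn-northd nor the probed ovsdb-server target runs here, so no matching socket files exist; the dpif-netdev calls likewise fail because no userspace datapath is present. A quick Python check of what actually exists, with the conventional rundir paths assumed:

    import glob
    # Unixctl control sockets are named <daemon>.<pid>.ctl; the exporter's
    # lookups fail when no matching file exists. Rundir paths are assumptions.
    for pattern in ('/run/ovn/*.ctl', '/run/openvswitch/*.ctl'):
        print(pattern, glob.glob(pattern))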
Nov 26 02:17:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:03 compute-0 podman[454106]: 2025-11-26 02:17:03.567959829 +0000 UTC m=+0.105239200 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:17:03 compute-0 podman[454104]: 2025-11-26 02:17:03.585619424 +0000 UTC m=+0.135694473 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:17:03 compute-0 podman[454105]: 2025-11-26 02:17:03.595664735 +0000 UTC m=+0.140343053 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 26 02:17:03 compute-0 nova_compute[350387]: 2025-11-26 02:17:03.640 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.276 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.299 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.300 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.301 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.302 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.303 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.304 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
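[editor note] The acquire/release pairs above show nova serializing storage-registry bookkeeping behind a named oslo.concurrency lock; a minimal sketch of the same pattern (the guarded body is a placeholder):

    from oslo_concurrency import lockutils

    # Same pattern as the log lines above: a named in-process lock; the
    # waited/held durations logged come from oslo's own instrumentation.
    with lockutils.lock('storage-registry-lock'):
        pass  # read/update the storage-users registry here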
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.333 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.352 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.353 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Image id dbaf181e-c7da-4938-bfef-7ab3aa9a19bc yields fingerprint 75aa7190add890d937d223054d1bca64341e098f _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.354 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] image dbaf181e-c7da-4938-bfef-7ab3aa9a19bc at (/var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f): checking
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.354 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] image dbaf181e-c7da-4938-bfef-7ab3aa9a19bc at (/var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.359 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.361 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] 74d081af-66cd-4e37-99e4-31f777885766 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.361 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] add194b7-6a6c-48ef-8355-3344185eb43e is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.362 350391 WARNING nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.363 350391 WARNING nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.364 350391 WARNING nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Unknown base file: /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.365 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Active base files: /var/lib/nova/instances/_base/75aa7190add890d937d223054d1bca64341e098f
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.365 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Removable base files: /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.367 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f456d938eec6117407d48c9debbc5604edb4194e
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.368 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8b2418705cce6052c0ebe8d6666be2547437287b
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.369 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/beedb32a5f0393b3b7ca21cf7409d6e587060a17
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.369 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.370 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.371 350391 DEBUG nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 26 02:17:04 compute-0 nova_compute[350387]: 2025-11-26 02:17:04.372 350391 INFO nova.virt.libvirt.imagecache [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
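[editor note] A detail worth noting in the image-cache pass above: base files under /var/lib/nova/instances/_base/ are named by the SHA-1 hex digest of the Glance image id, which is why the empty image id yields da39a3ee5e6b4b0d3255bfef95601890afd80709, the SHA-1 of the empty string. A quick check:

    import hashlib
    # Per the log lines above, 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc' maps to
    # base file 75aa7190add890d937d223054d1bca64341e098f; '' maps to the
    # SHA-1 of the empty string (da39a3ee...).
    for image_id in ('dbaf181e-c7da-4938-bfef-7ab3aa9a19bc', ''):
        print(repr(image_id), hashlib.sha1(image_id.encode('utf-8')).hexdigest())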
Nov 26 02:17:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:08 compute-0 nova_compute[350387]: 2025-11-26 02:17:08.644 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:09 compute-0 nova_compute[350387]: 2025-11-26 02:17:09.280 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:10 compute-0 podman[454160]: 2025-11-26 02:17:10.565856608 +0000 UTC m=+0.119285233 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 02:17:10 compute-0 podman[454161]: 2025-11-26 02:17:10.611090675 +0000 UTC m=+0.155814696 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 26 02:17:13 compute-0 nova_compute[350387]: 2025-11-26 02:17:13.647 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:14 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1a54292d-edc3-47d9-83ad-3adbca0857b1 does not exist
Nov 26 02:17:14 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fc57e331-59e6-4ffc-8dbd-b5beffc72051 does not exist
Nov 26 02:17:14 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d7005348-8cf7-4a2b-a5f7-88578fce0794 does not exist
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:17:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:17:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:17:14 compute-0 nova_compute[350387]: 2025-11-26 02:17:14.283 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:14 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 02:17:14 compute-0 podman[454551]: 2025-11-26 02:17:14.785138715 +0000 UTC m=+0.096896236 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30)
Nov 26 02:17:14 compute-0 podman[454552]: 2025-11-26 02:17:14.80993791 +0000 UTC m=+0.122276607 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:17:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:17:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:15 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.079730559 +0000 UTC m=+0.081339230 container create 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.051513429 +0000 UTC m=+0.053122150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:15 compute-0 systemd[1]: Started libpod-conmon-52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33.scope.
Nov 26 02:17:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.243972791 +0000 UTC m=+0.245581492 container init 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.260238327 +0000 UTC m=+0.261846998 container start 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.265796993 +0000 UTC m=+0.267405684 container attach 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:17:15 compute-0 determined_kepler[454639]: 167 167
Nov 26 02:17:15 compute-0 systemd[1]: libpod-52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33.scope: Deactivated successfully.
Nov 26 02:17:15 compute-0 conmon[454639]: conmon 52e7e23bb8b3562bafe3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33.scope/container/memory.events
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.276619716 +0000 UTC m=+0.278228417 container died 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-295a22f42e69273943ead66675ee7b1745259b6e4c2c0c5e3437a7de21036094-merged.mount: Deactivated successfully.
Nov 26 02:17:15 compute-0 podman[454623]: 2025-11-26 02:17:15.372102631 +0000 UTC m=+0.373711302 container remove 52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kepler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:17:15 compute-0 systemd[1]: libpod-conmon-52e7e23bb8b3562bafe3fb42c19edc3e010ab2d5d56581744b4332dfd23b6e33.scope: Deactivated successfully.
Nov 26 02:17:15 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 02:17:15 compute-0 podman[454662]: 2025-11-26 02:17:15.663077274 +0000 UTC m=+0.079447037 container create 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:17:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:15 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:17:15 compute-0 podman[454662]: 2025-11-26 02:17:15.64047418 +0000 UTC m=+0.056843943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:15 compute-0 systemd[1]: Started libpod-conmon-79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8.scope.
Nov 26 02:17:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:15 compute-0 podman[454662]: 2025-11-26 02:17:15.855215587 +0000 UTC m=+0.271585430 container init 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:17:15 compute-0 podman[454662]: 2025-11-26 02:17:15.881791612 +0000 UTC m=+0.298161415 container start 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:17:15 compute-0 podman[454662]: 2025-11-26 02:17:15.88888177 +0000 UTC m=+0.305251623 container attach 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:17:17 compute-0 practical_fermat[454679]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:17:17 compute-0 practical_fermat[454679]: --> relative data size: 1.0
Nov 26 02:17:17 compute-0 practical_fermat[454679]: --> All data devices are unavailable
Nov 26 02:17:17 compute-0 systemd[1]: libpod-79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8.scope: Deactivated successfully.
Nov 26 02:17:17 compute-0 systemd[1]: libpod-79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8.scope: Consumed 1.284s CPU time.
Nov 26 02:17:17 compute-0 podman[454662]: 2025-11-26 02:17:17.240551222 +0000 UTC m=+1.656920985 container died 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 02:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e467460591297ab0886d068ae3790433ac7e348b2ea0778d245106cac6cb226-merged.mount: Deactivated successfully.
Nov 26 02:17:17 compute-0 podman[454662]: 2025-11-26 02:17:17.299557496 +0000 UTC m=+1.715927259 container remove 79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_fermat, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:17:17 compute-0 systemd[1]: libpod-conmon-79efda88a4f35bedf30f3921e5ccd7e17da3314ef867d58477cab1bfac8b7ae8.scope: Deactivated successfully.
Nov 26 02:17:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.432250921 +0000 UTC m=+0.103260494 container create 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.387273001 +0000 UTC m=+0.058282614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:18 compute-0 systemd[1]: Started libpod-conmon-227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175.scope.
Nov 26 02:17:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.620881876 +0000 UTC m=+0.291891439 container init 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.634720244 +0000 UTC m=+0.305729827 container start 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:17:18 compute-0 magical_solomon[454870]: 167 167
Nov 26 02:17:18 compute-0 systemd[1]: libpod-227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175.scope: Deactivated successfully.
Nov 26 02:17:18 compute-0 nova_compute[350387]: 2025-11-26 02:17:18.651 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.69631124 +0000 UTC m=+0.367320793 container attach 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:17:18 compute-0 podman[454854]: 2025-11-26 02:17:18.696713661 +0000 UTC m=+0.367723214 container died 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 02:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ebd3e7903906b30882ba19b445cbad366480014a930360c4eac5857d533e164-merged.mount: Deactivated successfully.
Nov 26 02:17:19 compute-0 podman[454854]: 2025-11-26 02:17:19.045218205 +0000 UTC m=+0.716227748 container remove 227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 02:17:19 compute-0 systemd[1]: libpod-conmon-227b53119e90376075861e6f55a066a95105288990a83c68df6b572b11754175.scope: Deactivated successfully.
Nov 26 02:17:19 compute-0 nova_compute[350387]: 2025-11-26 02:17:19.287 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:19 compute-0 podman[454893]: 2025-11-26 02:17:19.361630401 +0000 UTC m=+0.152085962 container create f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:17:19 compute-0 podman[454893]: 2025-11-26 02:17:19.263021188 +0000 UTC m=+0.053476649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:19 compute-0 systemd[1]: Started libpod-conmon-f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769.scope.
Nov 26 02:17:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d34d3e737709868d73579453031f8eda3bd80bdb6797dd61377519f5e96d22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d34d3e737709868d73579453031f8eda3bd80bdb6797dd61377519f5e96d22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d34d3e737709868d73579453031f8eda3bd80bdb6797dd61377519f5e96d22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d34d3e737709868d73579453031f8eda3bd80bdb6797dd61377519f5e96d22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:19 compute-0 podman[454893]: 2025-11-26 02:17:19.635618278 +0000 UTC m=+0.426073779 container init f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:17:19 compute-0 podman[454893]: 2025-11-26 02:17:19.655307389 +0000 UTC m=+0.445762810 container start f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 26 02:17:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:19 compute-0 podman[454893]: 2025-11-26 02:17:19.702659126 +0000 UTC m=+0.493114567 container attach f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:17:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:20 compute-0 condescending_bartik[454911]: {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    "0": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "devices": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "/dev/loop3"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            ],
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_name": "ceph_lv0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_size": "21470642176",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "name": "ceph_lv0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "tags": {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_name": "ceph",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.crush_device_class": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.encrypted": "0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_id": "0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.vdo": "0"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            },
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "vg_name": "ceph_vg0"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        }
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    ],
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    "1": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "devices": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "/dev/loop4"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            ],
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_name": "ceph_lv1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_size": "21470642176",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "name": "ceph_lv1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "tags": {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_name": "ceph",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.crush_device_class": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.encrypted": "0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_id": "1",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.vdo": "0"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            },
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "vg_name": "ceph_vg1"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        }
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    ],
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    "2": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "devices": [
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "/dev/loop5"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            ],
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_name": "ceph_lv2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_size": "21470642176",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "name": "ceph_lv2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "tags": {
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.cluster_name": "ceph",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.crush_device_class": "",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.encrypted": "0",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osd_id": "2",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:                "ceph.vdo": "0"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            },
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "type": "block",
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:            "vg_name": "ceph_vg2"
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:        }
Nov 26 02:17:20 compute-0 condescending_bartik[454911]:    ]
Nov 26 02:17:20 compute-0 condescending_bartik[454911]: }
Nov 26 02:17:20 compute-0 systemd[1]: libpod-f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769.scope: Deactivated successfully.
Nov 26 02:17:20 compute-0 podman[454893]: 2025-11-26 02:17:20.498407132 +0000 UTC m=+1.288862593 container died f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 26 02:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-88d34d3e737709868d73579453031f8eda3bd80bdb6797dd61377519f5e96d22-merged.mount: Deactivated successfully.
Nov 26 02:17:20 compute-0 podman[454893]: 2025-11-26 02:17:20.878330817 +0000 UTC m=+1.668786238 container remove f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:17:20 compute-0 systemd[1]: libpod-conmon-f566bcdd85f882db828a9b1fb515e3ab30323c9a2a8d724bbedc6b6e2d0ea769.scope: Deactivated successfully.
Nov 26 02:17:20 compute-0 podman[454932]: 2025-11-26 02:17:20.956731613 +0000 UTC m=+0.255783707 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6)
Nov 26 02:17:20 compute-0 podman[454933]: 2025-11-26 02:17:20.97266056 +0000 UTC m=+0.268164155 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:17:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:21 compute-0 podman[455113]: 2025-11-26 02:17:21.878245442 +0000 UTC m=+0.041918686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:21 compute-0 podman[455113]: 2025-11-26 02:17:21.986813464 +0000 UTC m=+0.150486728 container create 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:17:22 compute-0 systemd[1]: Started libpod-conmon-0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58.scope.
Nov 26 02:17:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:22 compute-0 podman[455113]: 2025-11-26 02:17:22.211355005 +0000 UTC m=+0.375028279 container init 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:17:22 compute-0 podman[455113]: 2025-11-26 02:17:22.225644885 +0000 UTC m=+0.389318129 container start 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:17:22 compute-0 relaxed_merkle[455129]: 167 167
Nov 26 02:17:22 compute-0 systemd[1]: libpod-0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58.scope: Deactivated successfully.
Nov 26 02:17:22 compute-0 conmon[455129]: conmon 0356a800c5c3610e318f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58.scope/container/memory.events
Nov 26 02:17:22 compute-0 podman[455113]: 2025-11-26 02:17:22.334606538 +0000 UTC m=+0.498279772 container attach 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:17:22 compute-0 podman[455113]: 2025-11-26 02:17:22.335047231 +0000 UTC m=+0.498720475 container died 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f872a7c198a710f084abff08318e195d814d30354e721ac2397806940595393-merged.mount: Deactivated successfully.
Nov 26 02:17:22 compute-0 podman[455113]: 2025-11-26 02:17:22.765521002 +0000 UTC m=+0.929194236 container remove 0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_merkle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:17:22 compute-0 systemd[1]: libpod-conmon-0356a800c5c3610e318f4c9d8e87a6c093bb4fff35d2b9c6acf8949afe007c58.scope: Deactivated successfully.
Nov 26 02:17:23 compute-0 podman[455152]: 2025-11-26 02:17:23.026570066 +0000 UTC m=+0.078457339 container create fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:17:23 compute-0 podman[455152]: 2025-11-26 02:17:22.993302914 +0000 UTC m=+0.045190227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:17:23 compute-0 systemd[1]: Started libpod-conmon-fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78.scope.
Nov 26 02:17:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02b3671048f975e960055895969107e2c938908625770e18b42e4911892e6a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02b3671048f975e960055895969107e2c938908625770e18b42e4911892e6a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02b3671048f975e960055895969107e2c938908625770e18b42e4911892e6a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02b3671048f975e960055895969107e2c938908625770e18b42e4911892e6a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:17:23 compute-0 podman[455152]: 2025-11-26 02:17:23.225769437 +0000 UTC m=+0.277656710 container init fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:17:23 compute-0 podman[455152]: 2025-11-26 02:17:23.250244393 +0000 UTC m=+0.302131666 container start fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:17:23 compute-0 podman[455152]: 2025-11-26 02:17:23.260589893 +0000 UTC m=+0.312477176 container attach fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 02:17:23 compute-0 nova_compute[350387]: 2025-11-26 02:17:23.653 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:24 compute-0 nova_compute[350387]: 2025-11-26 02:17:24.290 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]: {
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_id": 0,
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "type": "bluestore"
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    },
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_id": 2,
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "type": "bluestore"
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    },
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_id": 1,
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:        "type": "bluestore"
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]:    }
Nov 26 02:17:24 compute-0 nice_hofstadter[455167]: }
Nov 26 02:17:24 compute-0 systemd[1]: libpod-fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78.scope: Deactivated successfully.
Nov 26 02:17:24 compute-0 systemd[1]: libpod-fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78.scope: Consumed 1.124s CPU time.
Nov 26 02:17:24 compute-0 podman[455200]: 2025-11-26 02:17:24.462758876 +0000 UTC m=+0.053308164 container died fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 02:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f02b3671048f975e960055895969107e2c938908625770e18b42e4911892e6a4-merged.mount: Deactivated successfully.
Nov 26 02:17:24 compute-0 podman[455200]: 2025-11-26 02:17:24.705201148 +0000 UTC m=+0.295750466 container remove fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:17:24 compute-0 systemd[1]: libpod-conmon-fa16b0abf7459bbcf649daa74993318f1b08bc455672a457c7e14e2b2f112c78.scope: Deactivated successfully.
Nov 26 02:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:17:24 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:24 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 2e495f12-9c64-4a24-8c49-996be124d630 does not exist
Nov 26 02:17:24 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 86529315-0d10-441a-b0a9-9419a20b7f94 does not exist
Nov 26 02:17:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:17:25.002 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:17:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:17:25.003 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:17:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:17:25.004 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:17:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:25 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.812979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445813056, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1417, "num_deletes": 251, "total_data_size": 2186398, "memory_usage": 2234872, "flush_reason": "Manual Compaction"}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445829312, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2142830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40251, "largest_seqno": 41667, "table_properties": {"data_size": 2136178, "index_size": 3787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13976, "raw_average_key_size": 19, "raw_value_size": 2122784, "raw_average_value_size": 3036, "num_data_blocks": 170, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123306, "oldest_key_time": 1764123306, "file_creation_time": 1764123445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 16414 microseconds, and 10146 cpu microseconds.
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.829401) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2142830 bytes OK
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.829426) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.832058) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.832078) EVENT_LOG_v1 {"time_micros": 1764123445832071, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.832099) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2180128, prev total WAL file size 2206616, number of live WAL files 2.
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
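
The rocksdb EVENT_LOG_v1 records above are single-line JSON documents embedded in the journal, so the whole flush is machine-recoverable: job 55 flushed 1417 memtable entries (251 deletes, 2186398 bytes) into L0 table #97 in about 16.4 ms, then dropped the now-redundant WAL. A minimal sketch of pulling those events back out of journal text (the regex is an assumption about the line shape, not part of rocksdb):

    import json
    import re

    # Matches the JSON payload rocksdb emits after "EVENT_LOG_v1".
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

    def rocksdb_events(journal_lines):
        """Yield parsed rocksdb events (flush_started, flush_finished, ...)."""
        for line in journal_lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Usage sketch: pair flush_started/flush_finished on the "job" field
    # to recover per-flush entry counts, byte sizes and LSM state.
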
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.833490) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2092KB)], [95(7084KB)]
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445833542, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9397323, "oldest_snapshot_seqno": -1}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5757 keys, 7691827 bytes, temperature: kUnknown
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445893142, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7691827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7655578, "index_size": 20779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 149148, "raw_average_key_size": 25, "raw_value_size": 7553619, "raw_average_value_size": 1312, "num_data_blocks": 826, "num_entries": 5757, "num_filter_entries": 5757, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.893942) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7691827 bytes
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.897144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 128.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 6275, records dropped: 518 output_compression: NoCompression
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.897174) EVENT_LOG_v1 {"time_micros": 1764123445897160, "job": 56, "event": "compaction_finished", "compaction_time_micros": 60114, "compaction_time_cpu_micros": 42354, "output_level": 6, "num_output_files": 1, "total_output_size": 7691827, "num_input_records": 6275, "num_output_records": 5757, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445900148, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123445903494, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.833332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.904342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.904348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.904351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.904354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:17:25 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:17:25.904357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
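
The range endpoints in the "Manual compaction from level-0 to level-6" line are hex-encoded monitor store keys, which explains the burst of "Manual compaction starting" entries: ceph-mon periodically asks rocksdb to compact ranges of trimmed data, here its paxos transaction log. Decoding the endpoints:

    # The endpoints logged by JOB 56 are plain hex: the prefix "paxos",
    # a NUL separator, then the paxos version as ASCII digits.
    start = bytes.fromhex("7061786F730033373635")  # b'paxos\x003765'
    end   = bytes.fromhex("7061786F730034303137")  # b'paxos\x004017'

So this pass compacted away roughly paxos versions 3765 through 4017; the remaining "starting" lines are the other prefix ranges queued in the same trim cycle.
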
Nov 26 02:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:17:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630076415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:17:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:17:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630076415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
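
Every mon_command is mirrored on the audit channel with the caller's address and entity, so client activity (here client.openstack polling pool capacity and quotas from 192.168.122.10) can be attributed mechanically. A sketch of extracting those fields; the regex is an assumption based on the two audit lines above:

    import re

    AUDIT_RE = re.compile(
        r"from='(?P<who>[^']*)' entity='(?P<entity>[^']*)'"
        r" cmd=\[(?P<cmd>.*)\]: dispatch"
    )

    line = ("log_channel(audit) log [DBG] : "
            "from='client.? 192.168.122.10:0/3630076415' "
            "entity='client.openstack' "
            "cmd=[{\"prefix\":\"df\", \"format\":\"json\"}]: dispatch")
    m = AUDIT_RE.search(line)
    # m['entity'] -> 'client.openstack'; m['cmd'] is itself JSON.
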
Nov 26 02:17:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:28 compute-0 nova_compute[350387]: 2025-11-26 02:17:28.655 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:29 compute-0 nova_compute[350387]: 2025-11-26 02:17:29.294 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:29 compute-0 podman[158021]: time="2025-11-26T02:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:17:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:17:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Nov 26 02:17:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:31 compute-0 openstack_network_exporter[367323]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:17:31 compute-0 openstack_network_exporter[367323]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:17:31 compute-0 openstack_network_exporter[367323]: ERROR   02:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:17:31 compute-0 openstack_network_exporter[367323]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:17:31 compute-0 openstack_network_exporter[367323]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
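
These exporter errors repeat on every scrape and are consistent with the role of this node: appctl reaches a daemon through its per-PID control socket, and a compute node running only ovn-controller (see the ovn_controller health checks below) has no ovn-northd socket to find; the dpif-netdev/* probes likewise apply only to the userspace (PMD) datapath, which this host does not appear to use. A sketch of the discovery the messages imply; the runtime directories are assumptions based on standard OVS/OVN layouts:

    import glob

    # appctl targets a daemon through a control socket such as
    # /var/run/ovn/ovn-northd.<pid>.ctl; when no such file exists the
    # exporter logs "no control socket files found".
    def control_sockets(daemon,
                        rundirs=("/var/run/ovn", "/var/run/openvswitch")):
        return [p for d in rundirs for p in glob.glob(f"{d}/{daemon}.*.ctl")]

    print(control_sockets("ovn-northd"))  # [] on this compute node
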
Nov 26 02:17:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:33 compute-0 nova_compute[350387]: 2025-11-26 02:17:33.657 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 26 02:17:34 compute-0 nova_compute[350387]: 2025-11-26 02:17:34.297 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:34 compute-0 podman[455264]: 2025-11-26 02:17:34.551024992 +0000 UTC m=+0.103558103 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:17:34 compute-0 podman[455266]: 2025-11-26 02:17:34.557651518 +0000 UTC m=+0.099623223 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:17:34 compute-0 podman[455265]: 2025-11-26 02:17:34.589017276 +0000 UTC m=+0.143022858 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
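
The health_status events above dump every container label, including a config_data label in which edpm_ansible embeds the container's full definition as a Python dict literal (note the single quotes). A sketch of reading that back for one of the containers named above; using ast.literal_eval rather than json.loads is the key detail, the rest is standard podman:

    import ast
    import json
    import subprocess

    def edpm_config(name):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .Config.Labels}}", name],
            capture_output=True, text=True, check=True).stdout
        labels = json.loads(out)
        # config_data is a Python dict literal, not JSON.
        return ast.literal_eval(labels["config_data"])

    print(edpm_config("ceilometer_agent_compute")["healthcheck"])
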
Nov 26 02:17:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 26 02:17:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.373 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.414 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.415 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.416 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.416 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.417 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.661 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:17:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982889442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:17:38 compute-0 nova_compute[350387]: 2025-11-26 02:17:38.897 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.043 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.044 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.053 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.054 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.300 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.535 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.537 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.897369384765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.538 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.539 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:17:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.803 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.805 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.806 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.807 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:17:39 compute-0 nova_compute[350387]: 2025-11-26 02:17:39.992 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:17:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:17:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244936423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:17:40 compute-0 nova_compute[350387]: 2025-11-26 02:17:40.479 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:17:40 compute-0 nova_compute[350387]: 2025-11-26 02:17:40.492 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:17:40 compute-0 nova_compute[350387]: 2025-11-26 02:17:40.657 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:17:40 compute-0 nova_compute[350387]: 2025-11-26 02:17:40.662 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:17:40 compute-0 nova_compute[350387]: 2025-11-26 02:17:40.663 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
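
That closes one full update_available_resource pass: audit local resources (including two ceph df subprocesses at ~0.48 s each, which dominate the 1.124 s the compute_resources lock was held), then confirm the placement inventory is unchanged. The inventory logged above turns into schedulable capacity as (total - reserved) * allocation_ratio:

    # Capacity placement derives from the inventory in the log above:
    #   capacity = (total - reserved) * allocation_ratio
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -- ample headroom next to
    # the 2 used vcpus / 768 MB used_ram in the final resource view.
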
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:17:41
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:17:41 compute-0 podman[455363]: 2025-11-26 02:17:41.588475839 +0000 UTC m=+0.142833943 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:17:41 compute-0 nova_compute[350387]: 2025-11-26 02:17:41.590 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:41 compute-0 nova_compute[350387]: 2025-11-26 02:17:41.590 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:41 compute-0 nova_compute[350387]: 2025-11-26 02:17:41.591 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:41 compute-0 podman[455364]: 2025-11-26 02:17:41.627526803 +0000 UTC m=+0.179958133 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:17:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:17:42 compute-0 nova_compute[350387]: 2025-11-26 02:17:42.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.614 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.615 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.616 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.617 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:17:43 compute-0 nova_compute[350387]: 2025-11-26 02:17:43.668 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 26 02:17:44 compute-0 nova_compute[350387]: 2025-11-26 02:17:44.304 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:45 compute-0 nova_compute[350387]: 2025-11-26 02:17:45.297 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:17:45 compute-0 nova_compute[350387]: 2025-11-26 02:17:45.320 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:17:45 compute-0 nova_compute[350387]: 2025-11-26 02:17:45.321 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
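
The instance_info_cache payload written during that heal is one JSON list per instance, so the addressing of each VIF is machine-readable. A sketch of extracting it, with the expected result taken from the blob logged above (function name is illustrative):

    import json

    def fixed_ips(network_info_json):
        """Yield (devname, mac, fixed ip) from a nova network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield vif["devname"], vif["address"], ip["address"]

    # For instance 74d081af-66cd-4e37-99e4-31f777885766 above this yields
    # ('tap0659d4f2-a7', 'fa:16:3e:91:80:c9', '10.100.2.57').
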
Nov 26 02:17:45 compute-0 podman[455406]: 2025-11-26 02:17:45.5559098 +0000 UTC m=+0.104723555 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 02:17:45 compute-0 podman[455407]: 2025-11-26 02:17:45.593501573 +0000 UTC m=+0.121787143 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:17:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.836 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.881 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid 74d081af-66cd-4e37-99e4-31f777885766 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.882 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Triggering sync for uuid add194b7-6a6c-48ef-8355-3344185eb43e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.883 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.884 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "74d081af-66cd-4e37-99e4-31f777885766" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.886 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.887 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.930 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "74d081af-66cd-4e37-99e4-31f777885766" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:17:46 compute-0 nova_compute[350387]: 2025-11-26 02:17:46.932 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
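
The _sync_power_states pass above takes a short-lived lock named after each instance uuid, so the periodic sync cannot race per-instance operations that key on the same name. The shape of that pattern is oslo.concurrency's named-semaphore decorator; a sketch under that assumption (nova wires this up dynamically per uuid, the names below are illustrative):

    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(uuid):
        # Process-local named semaphore keyed on the instance uuid,
        # matching the acquire/release pairs logged above.
        @lockutils.synchronized(uuid)
        def _sync():
            ...  # compare driver power state with the DB record
        return _sync()
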
Nov 26 02:17:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Nov 26 02:17:48 compute-0 nova_compute[350387]: 2025-11-26 02:17:48.670 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:49 compute-0 nova_compute[350387]: 2025-11-26 02:17:49.309 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:17:49 compute-0 nova_compute[350387]: 2025-11-26 02:17:49.344 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 26 02:17:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:51 compute-0 nova_compute[350387]: 2025-11-26 02:17:51.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:17:51 compute-0 podman[455447]: 2025-11-26 02:17:51.591129406 +0000 UTC m=+0.141108934 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015154937487790584 of space, bias 1.0, pg target 0.4546481246337175 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
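The pg_autoscaler lines above expose the whole computation: each pool's usage ratio of the 64411926528-byte cluster (about 60 GiB, matching the pgmap lines) is multiplied by its bias and by a cluster-wide PG budget to produce the raw "pg target", which the autoscaler then rounds to a power of two subject to per-pool minimums and a change threshold. A minimal sketch reproducing the logged values, assuming the budget of 300 comes from mon_target_pg_per_osd=100 across 3 OSDs:

# Sketch of the pg_autoscaler arithmetic seen in the log; PG_BUDGET = 300 is
# an assumption (mon_target_pg_per_osd 100 * 3 OSDs) that fits the numbers.
PG_BUDGET = 300

pools = {
    # name: (usage_ratio, bias) as logged
    "vms":                (0.0015154937487790584, 1.0),
    "images":             (0.00125203744627857,   1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (ratio, bias) in pools.items():
    print(name, "pg target", ratio * bias * PG_BUDGET)

# vms pg target 0.4546481246337175                   (quantized to 32 in the log)
# images pg target 0.375611233883571                 (quantized to 32)
# cephfs.cephfs.meta pg target 0.0006104707950771635 (quantized to 16)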
Nov 26 02:17:51 compute-0 podman[455448]: 2025-11-26 02:17:51.60446793 +0000 UTC m=+0.148390969 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:17:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:53 compute-0 nova_compute[350387]: 2025-11-26 02:17:53.672 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:54 compute-0 nova_compute[350387]: 2025-11-26 02:17:54.313 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:17:55 compute-0 nova_compute[350387]: 2025-11-26 02:17:55.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:17:55 compute-0 nova_compute[350387]: 2025-11-26 02:17:55.464 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:17:55 compute-0 nova_compute[350387]: 2025-11-26 02:17:55.466 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:17:55 compute-0 nova_compute[350387]: 2025-11-26 02:17:55.467 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:17:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:58 compute-0 nova_compute[350387]: 2025-11-26 02:17:58.676 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:59 compute-0 nova_compute[350387]: 2025-11-26 02:17:59.317 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:17:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:17:59 compute-0 podman[158021]: time="2025-11-26T02:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:17:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:17:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Nov 26 02:18:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:00 compute-0 nova_compute[350387]: 2025-11-26 02:18:00.310 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:00 compute-0 nova_compute[350387]: 2025-11-26 02:18:00.311 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 02:18:01 compute-0 openstack_network_exporter[367323]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:18:01 compute-0 openstack_network_exporter[367323]: ERROR   02:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:18:01 compute-0 openstack_network_exporter[367323]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:18:01 compute-0 openstack_network_exporter[367323]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:18:01 compute-0 openstack_network_exporter[367323]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
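openstack_network_exporter reaches the OVS/OVN daemons through their appctl control sockets under the /run/openvswitch and /run/ovn mounts listed in its config_data above; ovn-northd runs on the control plane rather than on a compute node, so that lookup can never succeed here. A quick diagnostic sketch, assuming the sockets follow the usual <daemon>.<pid>.ctl naming:

# Check which appctl control sockets exist on this host; the paths match the
# exporter's volume mounts, the *.ctl patterns are an assumption about the
# standard daemon.<pid>.ctl naming convention.
import glob

for pattern in (
    "/run/openvswitch/ovs-vswitchd.*.ctl",
    "/run/openvswitch/ovsdb-server.*.ctl",
    "/run/ovn/ovn-northd.*.ctl",
):
    print(pattern, "->", glob.glob(pattern) or "missing")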
Nov 26 02:18:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:03 compute-0 nova_compute[350387]: 2025-11-26 02:18:03.679 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:04 compute-0 nova_compute[350387]: 2025-11-26 02:18:04.323 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:05 compute-0 podman[455490]: 2025-11-26 02:18:05.265482408 +0000 UTC m=+0.103901532 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:18:05 compute-0 podman[455489]: 2025-11-26 02:18:05.287054082 +0000 UTC m=+0.127571115 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 02:18:05 compute-0 podman[455491]: 2025-11-26 02:18:05.290042586 +0000 UTC m=+0.123789919 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:18:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:08 compute-0 nova_compute[350387]: 2025-11-26 02:18:08.685 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:09 compute-0 nova_compute[350387]: 2025-11-26 02:18:09.327 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 85 B/s wr, 2 op/s
Nov 26 02:18:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:11 compute-0 nova_compute[350387]: 2025-11-26 02:18:11.316 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:11 compute-0 nova_compute[350387]: 2025-11-26 02:18:11.317 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 02:18:11 compute-0 nova_compute[350387]: 2025-11-26 02:18:11.335 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 02:18:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 170 B/s wr, 4 op/s
Nov 26 02:18:12 compute-0 podman[455547]: 2025-11-26 02:18:12.612301473 +0000 UTC m=+0.159347925 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:18:12 compute-0 podman[455548]: 2025-11-26 02:18:12.664496386 +0000 UTC m=+0.201088905 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 02:18:13 compute-0 nova_compute[350387]: 2025-11-26 02:18:13.687 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Nov 26 02:18:14 compute-0 nova_compute[350387]: 2025-11-26 02:18:14.332 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Nov 26 02:18:16 compute-0 podman[455589]: 2025-11-26 02:18:16.581260187 +0000 UTC m=+0.131843155 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, name=ubi9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 02:18:16 compute-0 podman[455590]: 2025-11-26 02:18:16.581185895 +0000 UTC m=+0.127745710 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 02:18:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Nov 26 02:18:18 compute-0 nova_compute[350387]: 2025-11-26 02:18:18.689 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:19 compute-0 nova_compute[350387]: 2025-11-26 02:18:19.337 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Nov 26 02:18:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 8.5 KiB/s wr, 3 op/s
Nov 26 02:18:22 compute-0 podman[455627]: 2025-11-26 02:18:22.573638973 +0000 UTC m=+0.118176692 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Nov 26 02:18:22 compute-0 podman[455628]: 2025-11-26 02:18:22.581057971 +0000 UTC m=+0.113257895 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
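All of the health_status events above share the same parenthesized key=value payload. A small, hypothetical parser for pulling the health fields out of such a journal line (the sample is trimmed from the node_exporter event just above):

# Hypothetical helper: extract key=value fields from a podman health_status
# journal line and print the health summary.
import re

line = (
    "container health_status 5c2227d5f262 (image=quay.io/prometheus/node-exporter:v1.5.0, "
    "name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=)"
)

fields = dict(re.findall(r"(\w+)=([^,()]*)", line))
print(fields["name"], fields["health_status"], fields["health_failing_streak"])
# -> node_exporter healthy 0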
Nov 26 02:18:23 compute-0 nova_compute[350387]: 2025-11-26 02:18:23.691 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Nov 26 02:18:24 compute-0 nova_compute[350387]: 2025-11-26 02:18:24.340 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:18:25.003 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:18:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:18:25.003 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:18:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:18:25.004 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:18:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Nov 26 02:18:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:18:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198370314' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:18:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:18:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198370314' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.457074879 +0000 UTC m=+0.081056412 container create 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.423289472 +0000 UTC m=+0.047271095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:27 compute-0 systemd[1]: Started libpod-conmon-80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834.scope.
Nov 26 02:18:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.632128844 +0000 UTC m=+0.256110387 container init 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.653639566 +0000 UTC m=+0.277621119 container start 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.660622872 +0000 UTC m=+0.284604435 container attach 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 02:18:27 compute-0 sharp_mccarthy[455955]: 167 167
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.668145983 +0000 UTC m=+0.292127546 container died 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:18:27 compute-0 systemd[1]: libpod-80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834.scope: Deactivated successfully.
Nov 26 02:18:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Nov 26 02:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-94ebb3a45b55faf7808f08c0c7abacb473c0c43c035ab1bc3e9d78bc17e2262f-merged.mount: Deactivated successfully.
Nov 26 02:18:27 compute-0 podman[455939]: 2025-11-26 02:18:27.748456763 +0000 UTC m=+0.372438296 container remove 80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:18:27 compute-0 systemd[1]: libpod-conmon-80e2033f23a08c7c73ef793d1623816231c6987101bcb5d3c509af04f6d71834.scope: Deactivated successfully.
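The create/init/start/attach/died/remove burst around "sharp_mccarthy" is cephadm running a one-shot container whose only output is "167 167", the ceph uid/gid baked into the image. This looks like cephadm's uid/gid probe (an assumption), roughly equivalent to:

# Reproduce the probable uid/gid probe: run the same ceph image and stat a
# ceph-owned path (the exact path cephadm stats is an assumption).
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # expected: 167 167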
Nov 26 02:18:28 compute-0 podman[455978]: 2025-11-26 02:18:28.049429426 +0000 UTC m=+0.101471084 container create f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:18:28 compute-0 podman[455978]: 2025-11-26 02:18:27.998439647 +0000 UTC m=+0.050481365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:28 compute-0 systemd[1]: Started libpod-conmon-f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02.scope.
Nov 26 02:18:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f14dcf52d60be81e0c9f5c69328448e893a99434cd54d8095b68b2252b18bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f14dcf52d60be81e0c9f5c69328448e893a99434cd54d8095b68b2252b18bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f14dcf52d60be81e0c9f5c69328448e893a99434cd54d8095b68b2252b18bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f14dcf52d60be81e0c9f5c69328448e893a99434cd54d8095b68b2252b18bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:28 compute-0 podman[455978]: 2025-11-26 02:18:28.204340186 +0000 UTC m=+0.256381914 container init f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 02:18:28 compute-0 podman[455978]: 2025-11-26 02:18:28.232264778 +0000 UTC m=+0.284306466 container start f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:18:28 compute-0 podman[455978]: 2025-11-26 02:18:28.24587816 +0000 UTC m=+0.297919848 container attach f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:18:28 compute-0 nova_compute[350387]: 2025-11-26 02:18:28.694 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:29 compute-0 nova_compute[350387]: 2025-11-26 02:18:29.344 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:29 compute-0 podman[158021]: time="2025-11-26T02:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:18:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45407 "" "Go-http-client/1.1"
Nov 26 02:18:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9078 "" "Go-http-client/1.1"
Nov 26 02:18:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]: [
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:    {
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "available": false,
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "ceph_device": false,
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "lsm_data": {},
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "lvs": [],
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "path": "/dev/sr0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "rejected_reasons": [
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "Has a FileSystem",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "Insufficient space (<5GB)"
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        ],
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        "sys_api": {
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "actuators": null,
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "device_nodes": "sr0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "devname": "sr0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "human_readable_size": "482.00 KB",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "id_bus": "ata",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "model": "QEMU DVD-ROM",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "nr_requests": "2",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "parent": "/dev/sr0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "partitions": {},
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "path": "/dev/sr0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "removable": "1",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "rev": "2.5+",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "ro": "0",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "rotational": "1",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "sas_address": "",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "sas_device_handle": "",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "scheduler_mode": "mq-deadline",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "sectors": 0,
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "sectorsize": "2048",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "size": 493568.0,
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "support_discard": "2048",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "type": "disk",
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:            "vendor": "QEMU"
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:        }
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]:    }
Nov 26 02:18:31 compute-0 thirsty_hamilton[455993]: ]
Nov 26 02:18:31 compute-0 systemd[1]: libpod-f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02.scope: Deactivated successfully.
Nov 26 02:18:31 compute-0 systemd[1]: libpod-f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02.scope: Consumed 2.871s CPU time.
Nov 26 02:18:31 compute-0 podman[455978]: 2025-11-26 02:18:31.159970007 +0000 UTC m=+3.212011735 container died f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f14dcf52d60be81e0c9f5c69328448e893a99434cd54d8095b68b2252b18bd-merged.mount: Deactivated successfully.
Nov 26 02:18:31 compute-0 podman[455978]: 2025-11-26 02:18:31.25858735 +0000 UTC m=+3.310628998 container remove f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 02:18:31 compute-0 systemd[1]: libpod-conmon-f180786e191e6a7f453910d29e490ca9182deaf70f098483b78e59f656d9ef02.scope: Deactivated successfully.
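The "thirsty_hamilton" container is cephadm's ceph-volume inventory scan; its JSON output above shows the only extra device, /dev/sr0, rejected for OSD use, and the result is written back through the mgr/cephadm/host.compute-0.devices.0 config-key just below. A sketch of how such a report can be filtered for usable disks (trimmed to the fields printed above):

# Filter a ceph-volume inventory report for devices cephadm could use as OSDs.
import json

inventory_text = """
[
  {
    "available": false,
    "path": "/dev/sr0",
    "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]
  }
]
"""

for dev in json.loads(inventory_text):
    if dev["available"]:
        print("usable:", dev["path"])
    else:
        print("rejected:", dev["path"], "-", "; ".join(dev["rejected_reasons"]))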
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:18:31 compute-0 openstack_network_exporter[367323]: ERROR   02:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:18:31 compute-0 openstack_network_exporter[367323]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:18:31 compute-0 openstack_network_exporter[367323]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:18:31 compute-0 openstack_network_exporter[367323]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:18:31 compute-0 openstack_network_exporter[367323]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c3d09458-f86f-4536-9d03-7d175aa1d696 does not exist
Nov 26 02:18:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fd2e6260-b545-4224-8c42-34a1568655f2 does not exist
Nov 26 02:18:31 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d2efbdc3-a679-45c5-9bca-a5c0dd2cba81 does not exist
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:18:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:18:31 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:18:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:32 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.789885144 +0000 UTC m=+0.094173040 container create 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.757010773 +0000 UTC m=+0.061298739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:32 compute-0 systemd[1]: Started libpod-conmon-52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b.scope.
Nov 26 02:18:32 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.940210616 +0000 UTC m=+0.244498572 container init 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.960269218 +0000 UTC m=+0.264557114 container start 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.967449979 +0000 UTC m=+0.271737875 container attach 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 26 02:18:32 compute-0 boring_keller[458509]: 167 167
Nov 26 02:18:32 compute-0 systemd[1]: libpod-52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b.scope: Deactivated successfully.
Nov 26 02:18:32 compute-0 conmon[458509]: conmon 52ed3ee98043e8dcaf99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b.scope/container/memory.events
Nov 26 02:18:32 compute-0 podman[458493]: 2025-11-26 02:18:32.971329528 +0000 UTC m=+0.275617394 container died 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 02:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bc11f3a15189fdc79c0175f1292cb13fcf7bf9284e1a5a7a7fcc57254f4290e-merged.mount: Deactivated successfully.
Nov 26 02:18:33 compute-0 podman[458493]: 2025-11-26 02:18:33.029867108 +0000 UTC m=+0.334154984 container remove 52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 02:18:33 compute-0 systemd[1]: libpod-conmon-52ed3ee98043e8dcaf99e66aaa56d44718a371368dbaa726bf47dc3f53eb254b.scope: Deactivated successfully.
Nov 26 02:18:33 compute-0 podman[458532]: 2025-11-26 02:18:33.354080942 +0000 UTC m=+0.112689528 container create fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:18:33 compute-0 podman[458532]: 2025-11-26 02:18:33.298433293 +0000 UTC m=+0.057041889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:33 compute-0 systemd[1]: Started libpod-conmon-fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104.scope.
Nov 26 02:18:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:33 compute-0 podman[458532]: 2025-11-26 02:18:33.538041576 +0000 UTC m=+0.296650222 container init fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:18:33 compute-0 podman[458532]: 2025-11-26 02:18:33.558104859 +0000 UTC m=+0.316713445 container start fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:18:33 compute-0 podman[458532]: 2025-11-26 02:18:33.567454051 +0000 UTC m=+0.326062637 container attach fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:18:33 compute-0 nova_compute[350387]: 2025-11-26 02:18:33.697 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:34 compute-0 nova_compute[350387]: 2025-11-26 02:18:34.349 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:34 compute-0 goofy_borg[458547]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:18:34 compute-0 goofy_borg[458547]: --> relative data size: 1.0
Nov 26 02:18:34 compute-0 goofy_borg[458547]: --> All data devices are unavailable
Nov 26 02:18:34 compute-0 systemd[1]: libpod-fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104.scope: Deactivated successfully.
Nov 26 02:18:34 compute-0 podman[458532]: 2025-11-26 02:18:34.931452788 +0000 UTC m=+1.690061354 container died fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:18:34 compute-0 systemd[1]: libpod-fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104.scope: Consumed 1.306s CPU time.
Nov 26 02:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9942ef315e5a94738cf1985512c6c639bbd565b5d829d5d58252fd8cf05898e5-merged.mount: Deactivated successfully.
Nov 26 02:18:35 compute-0 podman[458532]: 2025-11-26 02:18:35.037065397 +0000 UTC m=+1.795673973 container remove fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:18:35 compute-0 systemd[1]: libpod-conmon-fb6f44c9f6672a0fb4bb68c351d02d8b250e5f8f6784325b130d1cf69298a104.scope: Deactivated successfully.
Nov 26 02:18:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:35 compute-0 podman[458638]: 2025-11-26 02:18:35.472056555 +0000 UTC m=+0.110652352 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:18:35 compute-0 podman[458636]: 2025-11-26 02:18:35.473418923 +0000 UTC m=+0.110620390 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Nov 26 02:18:35 compute-0 podman[458637]: 2025-11-26 02:18:35.479327468 +0000 UTC m=+0.117352409 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:18:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.156376188 +0000 UTC m=+0.089807307 container create 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.12394835 +0000 UTC m=+0.057379469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:36 compute-0 systemd[1]: Started libpod-conmon-0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3.scope.
Nov 26 02:18:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.337312117 +0000 UTC m=+0.270743246 container init 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.354064366 +0000 UTC m=+0.287495475 container start 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.360958249 +0000 UTC m=+0.294389409 container attach 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:18:36 compute-0 determined_albattani[458797]: 167 167
Nov 26 02:18:36 compute-0 systemd[1]: libpod-0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3.scope: Deactivated successfully.
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.373303895 +0000 UTC m=+0.306735014 container died 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5e76110fcf284c81f76bbbe7f7a3cc3f1a94f941341c44ec45acab5b6e02ea-merged.mount: Deactivated successfully.
Nov 26 02:18:36 compute-0 podman[458781]: 2025-11-26 02:18:36.455665463 +0000 UTC m=+0.389096582 container remove 0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_albattani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 02:18:36 compute-0 systemd[1]: libpod-conmon-0c7fa40e1db5a86fbf23a5dae9e2729b05244ff71d2cb92db1793fbd3600d3f3.scope: Deactivated successfully.
Nov 26 02:18:36 compute-0 podman[458821]: 2025-11-26 02:18:36.688471656 +0000 UTC m=+0.082154533 container create b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 02:18:36 compute-0 podman[458821]: 2025-11-26 02:18:36.654045031 +0000 UTC m=+0.047727958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:36 compute-0 systemd[1]: Started libpod-conmon-b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a.scope.
Nov 26 02:18:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992afb89ee62753d9b17ce6395a87fd16d5b2c5441be6148d2415851f214dfef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992afb89ee62753d9b17ce6395a87fd16d5b2c5441be6148d2415851f214dfef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992afb89ee62753d9b17ce6395a87fd16d5b2c5441be6148d2415851f214dfef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992afb89ee62753d9b17ce6395a87fd16d5b2c5441be6148d2415851f214dfef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:36 compute-0 podman[458821]: 2025-11-26 02:18:36.876963007 +0000 UTC m=+0.270645864 container init b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:18:36 compute-0 podman[458821]: 2025-11-26 02:18:36.899612762 +0000 UTC m=+0.293295609 container start b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:18:36 compute-0 podman[458821]: 2025-11-26 02:18:36.904479048 +0000 UTC m=+0.298161905 container attach b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:18:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:37 compute-0 competent_davinci[458837]: {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    "0": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "devices": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "/dev/loop3"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            ],
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_name": "ceph_lv0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_size": "21470642176",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "name": "ceph_lv0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "tags": {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_name": "ceph",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.crush_device_class": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.encrypted": "0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_id": "0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.vdo": "0"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            },
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "vg_name": "ceph_vg0"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        }
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    ],
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    "1": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "devices": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "/dev/loop4"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            ],
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_name": "ceph_lv1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_size": "21470642176",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "name": "ceph_lv1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "tags": {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_name": "ceph",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.crush_device_class": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.encrypted": "0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_id": "1",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.vdo": "0"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            },
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "vg_name": "ceph_vg1"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        }
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    ],
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    "2": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "devices": [
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "/dev/loop5"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            ],
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_name": "ceph_lv2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_size": "21470642176",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "name": "ceph_lv2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "tags": {
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.cluster_name": "ceph",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.crush_device_class": "",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.encrypted": "0",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osd_id": "2",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:                "ceph.vdo": "0"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            },
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "type": "block",
Nov 26 02:18:37 compute-0 competent_davinci[458837]:            "vg_name": "ceph_vg2"
Nov 26 02:18:37 compute-0 competent_davinci[458837]:        }
Nov 26 02:18:37 compute-0 competent_davinci[458837]:    ]
Nov 26 02:18:37 compute-0 competent_davinci[458837]: }
Nov 26 02:18:37 compute-0 systemd[1]: libpod-b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a.scope: Deactivated successfully.
Nov 26 02:18:37 compute-0 podman[458821]: 2025-11-26 02:18:37.758719843 +0000 UTC m=+1.152402720 container died b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-992afb89ee62753d9b17ce6395a87fd16d5b2c5441be6148d2415851f214dfef-merged.mount: Deactivated successfully.
Nov 26 02:18:37 compute-0 podman[458821]: 2025-11-26 02:18:37.853803277 +0000 UTC m=+1.247486124 container remove b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:18:37 compute-0 systemd[1]: libpod-conmon-b4281c30689bd15016d09327f8cb70b35260cba32af5525b7e7097d678c8bf6a.scope: Deactivated successfully.
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.317 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.425 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.426 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.426 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.427 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.427 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.699 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:18:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421091892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:18:38 compute-0 nova_compute[350387]: 2025-11-26 02:18:38.940 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:18:38 compute-0 podman[459020]: 2025-11-26 02:18:38.960930067 +0000 UTC m=+0.091043162 container create 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:38.932751598 +0000 UTC m=+0.062864773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.047 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.048 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:18:39 compute-0 systemd[1]: Started libpod-conmon-934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d.scope.
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.054 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.054 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:18:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:39.127164085 +0000 UTC m=+0.257277210 container init 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:39.137604057 +0000 UTC m=+0.267717152 container start 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:39.142802153 +0000 UTC m=+0.272915288 container attach 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:18:39 compute-0 stupefied_gould[459038]: 167 167
Nov 26 02:18:39 compute-0 systemd[1]: libpod-934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d.scope: Deactivated successfully.
Nov 26 02:18:39 compute-0 conmon[459038]: conmon 934e71dc347df4e20d0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d.scope/container/memory.events
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:39.151427794 +0000 UTC m=+0.281540879 container died 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:18:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0acb71ecbafb8a9c0252aa785700215cbd1550834b9db79ce8aa352690bb2d-merged.mount: Deactivated successfully.
Nov 26 02:18:39 compute-0 podman[459020]: 2025-11-26 02:18:39.205679274 +0000 UTC m=+0.335792359 container remove 934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:18:39 compute-0 systemd[1]: libpod-conmon-934e71dc347df4e20d0d7976b7b0cadfa2e5cb98857b0cbd18d42f6ffbb1ac1d.scope: Deactivated successfully.
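
The init/start/attach/died/remove sequence above spans roughly 80 ms: a one-shot container (the auto-generated name stupefied_gould and the immediate removal suggest `podman run --rm`, the pattern cephadm uses for host probes). A hedged reproduction from Python; the stat command and probed path are assumptions, not taken from the log:

    import subprocess

    # Hypothetical one-shot probe; `--rm` produces the create/init/start/
    # attach/died/remove event chain that podman recorded above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # the container at 02:18:39 printed "167 167" (ceph UID/GID)
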
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.351 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:18:39 compute-0 podman[459061]: 2025-11-26 02:18:39.464195688 +0000 UTC m=+0.077844922 container create 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.487 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.489 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.89719009399414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
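
The pci_devices field in the resource view above is plain JSON; vendor 1af4 is Red Hat/virtio and 8086 is Intel, and every numa_node is null, which is consistent with the `socket` PCI NUMA affinity warning. A short sketch of slicing that list (two entries copied from the line above; the full list has 11):

    import json

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0",
       "product_id": "1002", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1002", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
       "product_id": "1237", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_1237", "dev_type": "type-PCI"}
    ]''')
    by_vendor = {}
    for dev in pci_devices:
        by_vendor.setdefault(dev["vendor_id"], []).append(dev["address"])
    print(by_vendor)  # {'1af4': ['0000:00:05.0'], '8086': ['0000:00:00.0']}
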
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.489 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.490 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:18:39 compute-0 podman[459061]: 2025-11-26 02:18:39.4350091 +0000 UTC m=+0.048658384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:18:39 compute-0 systemd[1]: Started libpod-conmon-6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a.scope.
Nov 26 02:18:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c158bbe41af5d417d2e16a6c5265e7e41c51dd9899a59c38aec059907011b70b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c158bbe41af5d417d2e16a6c5265e7e41c51dd9899a59c38aec059907011b70b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c158bbe41af5d417d2e16a6c5265e7e41c51dd9899a59c38aec059907011b70b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c158bbe41af5d417d2e16a6c5265e7e41c51dd9899a59c38aec059907011b70b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:18:39 compute-0 podman[459061]: 2025-11-26 02:18:39.598165931 +0000 UTC m=+0.211815185 container init 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.597 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.599 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.599 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.600 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:18:39 compute-0 podman[459061]: 2025-11-26 02:18:39.609917631 +0000 UTC m=+0.223566875 container start 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:18:39 compute-0 podman[459061]: 2025-11-26 02:18:39.617908675 +0000 UTC m=+0.231557909 container attach 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:18:39 compute-0 nova_compute[350387]: 2025-11-26 02:18:39.662 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:18:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Nov 26 02:18:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:18:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895656085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:18:40 compute-0 nova_compute[350387]: 2025-11-26 02:18:40.113 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
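
processutils ran the same `ceph df` any shell user could run; nova's RBD storage reporting is derived from its JSON. A minimal stand-alone equivalent, assuming the standard `ceph df --format=json` output shape with a top-level "stats" object:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True)
    stats = json.loads(out)["stats"]
    print("avail: %.1f GiB" % (stats["total_avail_bytes"] / 2**30))
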
Nov 26 02:18:40 compute-0 nova_compute[350387]: 2025-11-26 02:18:40.127 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:18:40 compute-0 nova_compute[350387]: 2025-11-26 02:18:40.157 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:18:40 compute-0 nova_compute[350387]: 2025-11-26 02:18:40.159 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:18:40 compute-0 nova_compute[350387]: 2025-11-26 02:18:40.160 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
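
The acquire at 02:18:39.490 and the release at 02:18:40.160 bracket one resource-update pass (held 0.670s). In oslo.concurrency that pairing is usually expressed with the synchronized decorator; a sketch of the idiom (nova wraps the same lockutils machinery in its own helper):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs with the "compute_resources" lock held; lockutils emits the
        # same acquired/released DEBUG lines seen above.
        pass
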
Nov 26 02:18:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:40 compute-0 agitated_murdock[459076]: {
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_id": 0,
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "type": "bluestore"
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    },
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_id": 2,
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "type": "bluestore"
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    },
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_id": 1,
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:        "type": "bluestore"
Nov 26 02:18:40 compute-0 agitated_murdock[459076]:    }
Nov 26 02:18:40 compute-0 agitated_murdock[459076]: }
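
The JSON printed by agitated_murdock maps OSD fsids to their devices (the data cephadm then stores under mgr/cephadm/host.compute-0.devices shortly below). Parsing it is straightforward; one entry is reproduced here, and the other two follow the same shape:

    import json

    raw = '''{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }'''
    for osd_uuid, osd in json.loads(raw).items():
        print("osd.%d on %s (%s)" % (osd["osd_id"], osd["device"], osd["type"]))
    # osd.0 on /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
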
Nov 26 02:18:40 compute-0 systemd[1]: libpod-6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a.scope: Deactivated successfully.
Nov 26 02:18:40 compute-0 podman[459061]: 2025-11-26 02:18:40.696464413 +0000 UTC m=+1.310113677 container died 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:18:40 compute-0 systemd[1]: libpod-6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a.scope: Consumed 1.090s CPU time.
Nov 26 02:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c158bbe41af5d417d2e16a6c5265e7e41c51dd9899a59c38aec059907011b70b-merged.mount: Deactivated successfully.
Nov 26 02:18:40 compute-0 podman[459061]: 2025-11-26 02:18:40.820812997 +0000 UTC m=+1.434462241 container remove 6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 02:18:40 compute-0 systemd[1]: libpod-conmon-6b17cdaf5d122b39a9574fa93c9c7dfa643211ce69a332ee08cd0d041787be6a.scope: Deactivated successfully.
Nov 26 02:18:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:18:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:18:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e5bd87a0-b242-44fb-aa68-ba65897c726f does not exist
Nov 26 02:18:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5bb354ce-58b2-44b5-9bf4-61290784ae40 does not exist
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:18:41
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
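
The balancer pass above ran in upmap mode with a 5% misplaced ceiling and found nothing to change across the ten pools (0/10). The same state can be read back with `ceph balancer status`; a hedged sketch, assuming its usual JSON fields:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"], text=True))
    print(status["mode"], status["active"])  # e.g. "upmap True"
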
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:18:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:18:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:41 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:18:42 compute-0 nova_compute[350387]: 2025-11-26 02:18:42.143 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:42 compute-0 nova_compute[350387]: 2025-11-26 02:18:42.147 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:42 compute-0 nova_compute[350387]: 2025-11-26 02:18:42.303 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.876 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, the polling cycle can be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.877 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
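
The warning above is purely about queueing: with one worker thread and many pollsters, each polling cycle serializes. A tiny ThreadPoolExecutor sketch showing the effect (pollster names taken from the lines that follow):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return "polled " + name

    pollsters = ["disk.ephemeral.size", "network.incoming.packets",
                 "disk.root.size", "cpu"]
    # max_workers=1 mirrors "[1] threads": tasks queue and run one at a time.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, pollsters):
            print(result)
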
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.877 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.878 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.885 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.895 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'name': 'te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.895 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.896 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.896 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.896 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.898 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.899 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.899 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:18:42.896463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.899 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.899 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:18:42.899641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.907 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.913 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.914 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
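
The per-instance packet counts above (13 and 25) are cumulative counters read through libvirt. A sketch of the underlying call; the interface name is a placeholder that real code would resolve from the domain XML (libvirt-python's interfaceStats returns an 8-tuple):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")
    # "vnet0" is a placeholder; the actual tap device comes from the XML.
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("vnet0")
    print("network.incoming.packets =", rx_packets)
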
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.914 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.914 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.914 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.915 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.915 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:18:42.915242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.916 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.916 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.917 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.917 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.917 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.917 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:18:42.917469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.918 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.918 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.919 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.919 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.919 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.919 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.920 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.920 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.920 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:18:42.920218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.921 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.922 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.922 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.922 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.922 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.922 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.923 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.923 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:18:42.922987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.924 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.924 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.924 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.925 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.925 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.925 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.925 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:18:42.925627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.948 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 332440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.998 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/cpu volume: 203130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.999 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
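
The cpu volumes above (332440000000 and 203130000000) are cumulative CPU time in nanoseconds, i.e. roughly 332 s and 203 s consumed since each instance started. A utilization percentage can be derived from two successive polls; the 300 s interval below is only an assumed example, not read from this deployment's config:

def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
    # Convert growth of the cumulative ns counter into a percentage
    # of the wall-clock polling interval, per vCPU.
    used_s = (cur_ns - prev_ns) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

# Hypothetical: 0.44 s of CPU burned over an assumed 300 s poll interval.
print(cpu_util_percent(332_000_000_000, 332_440_000_000, 300))  # ~0.147 %
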
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:42.999 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.000 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.000 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.000 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.000 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:18:43.000690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.001 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.001 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.002 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
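
The *.delta meters are derived from the cumulative counters by subtracting the previous poll's reading, which is why add194b7's delta is 0 while 74d081af's is 630: its cumulative network.outgoing.bytes is 2250 now, implying 1620 last cycle. A cache-based sketch of that derivation (illustrative only, not ceilometer's actual cache):

_cache: dict[tuple[str, str], int] = {}

def delta_sample(instance_id, meter, cumulative):
    # Remember the last cumulative reading per (instance, meter) pair
    # and report only the growth since then.
    key = (instance_id, meter)
    prev = _cache.get(key)
    _cache[key] = cumulative
    return max(cumulative - prev, 0) if prev is not None else 0

print(delta_sample("74d081af", "network.outgoing.bytes", 1620))  # first poll -> 0
print(delta_sample("74d081af", "network.outgoing.bytes", 2250))  # next poll -> 630
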
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.002 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.003 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.003 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.003 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.004 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 42.33203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:18:43.003512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.004 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/memory.usage volume: 43.23046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.005 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
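
memory.usage is reported in MB, and the fractional volumes above are exact KiB counts divided by 1024: 42.33203125 MB is 43348 KiB and 43.23046875 MB is 44268 KiB, consistent with the hypervisor exposing memory stats in KiB (an assumption about the source of these numbers):

# Both volumes above are whole numbers of KiB expressed in MB.
for kib in (43348, 44268):
    print(kib, "KiB =", kib / 1024, "MB")  # 42.33203125, 43.23046875
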
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.005 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.005 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.005 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.006 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.006 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.006 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.006 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:18:43.006716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.007 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.007 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.008 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.008 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.008 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.009 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.009 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.009 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.009 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.010 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.011 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.011 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:18:43.009449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.011 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.011 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.012 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.012 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:18:43.012346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.013 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.014 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
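
Two ids alternate in these lines: 15 emits the pollster and sample messages while 12 emits the "Updated heartbeat" confirmations, and the confirmations can trail by a few lines (the network.incoming.bytes.delta update above lands after the next pollster's discovery has already started). Under oslo.log's default format that field is the worker's PID, so the pattern reads as a polling worker handing heartbeat updates to a second worker; the queue below is only an illustrative model of such a hand-off, not ceilometer's implementation:

import queue
import threading
from datetime import datetime, timezone

updates: queue.Queue = queue.Queue()
status: dict[str, datetime] = {}

def heartbeat_writer() -> None:
    # Drains updates independently of the poller, so confirmations can
    # appear "late" relative to the polling log lines.
    while True:
        item = updates.get()
        if item is None:
            break
        name, ts = item
        status[name] = ts  # "Updated heartbeat for <name> (<ts>)"

t = threading.Thread(target=heartbeat_writer)
t.start()
updates.put(("network.incoming.bytes.delta", datetime.now(timezone.utc)))
updates.put(None)
t.join()
print(status)
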
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.014 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.014 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.014 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.015 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.015 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.015 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.016 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.016 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.017 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.017 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.017 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.018 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:18:43.015195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.018 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:18:43.018241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.019 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.019 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.020 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.020 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.020 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.021 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.021 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:18:43.021288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.038 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.038 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.066 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.066 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.067 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
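
disk.device.* meters emit one sample per attached block device, which is why each instance logs two capacity volumes: 1073741824 bytes is exactly a 1 GiB disk, and 509952 bytes matches a second, much smaller device (sized like a config drive, though that is an assumption; the log shows only the volumes). A quick check of those figures:

GiB = 1024 ** 3

# Device names are illustrative; the log shows only the per-device volumes.
for dev, capacity in {"vda": 1073741824, "vdb": 509952}.items():
    print(f"{dev}: {capacity} B = {capacity / GiB:.6f} GiB")
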
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.067 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.068 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.068 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.068 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.068 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:18:43.068520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.110 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.110 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.188 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 30366720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.189 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.190 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.191 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.191 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.192 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.192 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.192 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2432488124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.193 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 867897915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:18:43.192397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.194 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 2700802924 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.195 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 184971572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.196 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.197 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.197 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.197 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.198 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.198 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:18:43.197984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.198 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.199 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.199 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.200 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
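
disk.device.read.latency is a cumulative time counter in nanoseconds (assumed here to map to libvirt's rd_total_times), so dividing it by the disk.device.read.requests counter polled just above gives a mean per-request read latency for each device:

def avg_latency_ms(total_ns: int, requests: int) -> float:
    # Mean service time per request, derived from two cumulative counters.
    return (total_ns / requests) / 1e6 if requests else 0.0

# First device of 74d081af-...: 2432488124 ns spread over 1136 reads.
print(avg_latency_ms(2432488124, 1136))  # ~2.14 ms per read
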
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.200 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.201 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.201 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.202 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.202 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.203 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.203 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.203 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:18:43.201030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.203 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.204 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 73039872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.205 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.205 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.206 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:18:43.204447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.207 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.207 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.208 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:18:43.207532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
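
The power.state volume of 1 for both instances decodes as "running": the meter carries Nova-style power-state codes. The mapping below follows Nova's power_state constants; treat applying it to this deployment as an assumption:

# Nova power_state codes; 1 is RUNNING.
POWER_STATES = {0: "nostate", 1: "running", 3: "paused",
                4: "shutdown", 6: "crashed", 7: "suspended"}

print(POWER_STATES.get(1, "unknown"))  # both instances above -> running
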
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.209 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 8635030715 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.210 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:18:43.209694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.211 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 7633186066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.211 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.211 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.212 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.213 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.213 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.213 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.214 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:18:43.212965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.214 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.215 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.216 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.216 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.216 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:18:43.215549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.217 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.219 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.219 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.219 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.219 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.220 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.221 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.222 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.223 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.224 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:18:43.224 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:18:43 compute-0 nova_compute[350387]: 2025-11-26 02:18:43.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:18:43 compute-0 podman[459192]: 2025-11-26 02:18:43.544349156 +0000 UTC m=+0.098421459 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 02:18:43 compute-0 podman[459193]: 2025-11-26 02:18:43.613506863 +0000 UTC m=+0.164068268 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
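Each health_status=healthy entry above is podman running the healthcheck configured for the container ('test': '/openstack/healthcheck ...') and recording the result, including the failing streak. The same state can be probed by hand; a small sketch assuming podman is on PATH and the container name from the log exists on this host:

    import json
    import subprocess

    NAME = "ceilometer_agent_ipmi"

    # Fire the container's configured healthcheck once (what the timer does).
    subprocess.run(["podman", "healthcheck", "run", NAME], check=False)

    # Read back the status and failing streak that podman records.
    raw = subprocess.check_output(
        ["podman", "inspect", "--format", "{{json .State.Health}}", NAME])
    health = json.loads(raw)
    print(health["Status"], health["FailingStreak"])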
Nov 26 02:18:43 compute-0 nova_compute[350387]: 2025-11-26 02:18:43.704 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:18:44 compute-0 nova_compute[350387]: 2025-11-26 02:18:44.355 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:45 compute-0 nova_compute[350387]: 2025-11-26 02:18:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:18:45 compute-0 nova_compute[350387]: 2025-11-26 02:18:45.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:18:45 compute-0 nova_compute[350387]: 2025-11-26 02:18:45.709 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:18:45 compute-0 nova_compute[350387]: 2025-11-26 02:18:45.712 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:18:45 compute-0 nova_compute[350387]: 2025-11-26 02:18:45.713 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:18:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:18:47 compute-0 podman[459232]: 2025-11-26 02:18:47.593654741 +0000 UTC m=+0.133513752 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 02:18:47 compute-0 podman[459231]: 2025-11-26 02:18:47.622297733 +0000 UTC m=+0.166109975 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Nov 26 02:18:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:18:48 compute-0 nova_compute[350387]: 2025-11-26 02:18:48.685 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
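Stripped of its log prefix, the cached network_info above is ordinary JSON: one OVS VIF with a fixed IPv4 address on a tunneled tenant network (MTU 1442). A quick way to pull the useful fields out, shown here on a hand-copied minimal subset of the payload:

    # Minimal subset of the network_info entry logged above.
    vif = {
        "id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60",
        "address": "fa:16:3e:6e:b7:00",
        "devname": "tapcaa46d5d-d6",
        "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                 "ips": [{"address": "10.100.2.215"}]}]},
    }

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(f'{vif["devname"]} ({vif["address"]}) -> '
                  f'{ip["address"]} in {subnet["cidr"]}')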
Nov 26 02:18:48 compute-0 nova_compute[350387]: 2025-11-26 02:18:48.705 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:18:48 compute-0 nova_compute[350387]: 2025-11-26 02:18:48.708 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
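The heal pass serializes on a per-instance lock named "refresh_cache-<uuid>", so only one thread refreshes a given instance's cache at a time; "Acquiring", "Acquired" and "Releasing" in the log mark entry and exit of one critical section. A minimal sketch of that pattern using the real oslo_concurrency API (assumes oslo.concurrency is installed; the refresh function is a placeholder):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "add194b7-6a6c-48ef-8355-3344185eb43e"

    def refresh_network_info(uuid):
        # Placeholder for the Neutron info-cache refresh seen in the log.
        print(f"refreshing network info cache for {uuid}")

    # Entering/leaving this context manager produces the Acquiring/Acquired/
    # Releasing lock messages that oslo_concurrency logs.
    with lockutils.lock(f"refresh_cache-{INSTANCE_UUID}"):
        refresh_network_info(INSTANCE_UUID)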
Nov 26 02:18:48 compute-0 nova_compute[350387]: 2025-11-26 02:18:48.709 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:49 compute-0 nova_compute[350387]: 2025-11-26 02:18:49.360 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:18:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:51 compute-0 nova_compute[350387]: 2025-11-26 02:18:51.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:18:51 compute-0 nova_compute[350387]: 2025-11-26 02:18:51.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
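Every pg target above is the same product: fraction of raw space used × pool bias × a cluster-wide PG budget, then quantized to a power of two. The budget works out to exactly 300 here, consistent with the default of 100 target PGs per OSD on a three-OSD cluster (an inference from the numbers; the log does not state the OSD count). Checking three of the pools:

    # (space ratio, bias, pg target) copied from the pg_autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0015185461027544442, 1.0, 0.4555638308263333),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    for name, (ratio, bias, logged_target) in pools.items():
        computed = ratio * bias * PG_BUDGET
        assert abs(computed - logged_target) < 1e-12, name
        print(f"{name}: pg target {computed:.10g} (matches log)")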
Nov 26 02:18:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:18:53 compute-0 podman[459271]: 2025-11-26 02:18:53.576953683 +0000 UTC m=+0.123145441 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Nov 26 02:18:53 compute-0 podman[459272]: 2025-11-26 02:18:53.580871633 +0000 UTC m=+0.118124921 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:18:53 compute-0 nova_compute[350387]: 2025-11-26 02:18:53.709 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:18:54 compute-0 nova_compute[350387]: 2025-11-26 02:18:54.364 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:18:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:18:57 compute-0 nova_compute[350387]: 2025-11-26 02:18:57.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:18:57 compute-0 nova_compute[350387]: 2025-11-26 02:18:57.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
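_reclaim_queued_deletes does nothing here because reclaim_instance_interval is at its default of 0: soft delete is disabled, so instances are destroyed immediately rather than queued for later reclamation. The guard reduces to a sketch like this (real oslo_config API; the surrounding function is illustrative, not nova's code):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.IntOpt("reclaim_instance_interval", default=0,
                                 help="Interval for reclaiming soft deletes"))

    def reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise reclaim SOFT_DELETED instances older than the interval.

    reclaim_queued_deletes()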
Nov 26 02:18:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:18:58 compute-0 nova_compute[350387]: 2025-11-26 02:18:58.712 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:59 compute-0 nova_compute[350387]: 2025-11-26 02:18:59.369 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:18:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:18:59 compute-0 podman[158021]: time="2025-11-26T02:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:18:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:18:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
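The two GETs above are a metrics collector talking to the libpod REST API over the podman socket (podman_exporter is configured with CONTAINER_HOST=unix:///run/podman/podman.sock, per its config_data later in the log). The same endpoint can be queried directly; a sketch assuming that default root socket path and sufficient privileges:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a UNIX socket, which is where libpod listens."""

        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.sock_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])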
Nov 26 02:19:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:01 compute-0 openstack_network_exporter[367323]: ERROR   02:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:19:01 compute-0 openstack_network_exporter[367323]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:19:01 compute-0 openstack_network_exporter[367323]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:19:01 compute-0 openstack_network_exporter[367323]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:19:01 compute-0 openstack_network_exporter[367323]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
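These exporter errors mean it could not find the .ctl control sockets that appctl-style calls need; the dpif-netdev/pmd-* queries additionally require a userspace datapath, and this node uses the kernel datapath (datapath_type "system" in the network_info above), hence "please specify an existing datapath". A quick check for the sockets it is looking for, with the usual socket directories assumed:

    import glob

    # Usual control-socket locations for ovs-vswitchd / ovsdb-server and OVN.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control socket files found")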
Nov 26 02:19:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:19:03 compute-0 nova_compute[350387]: 2025-11-26 02:19:03.715 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:19:04 compute-0 nova_compute[350387]: 2025-11-26 02:19:04.372 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.242024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545242129, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1033, "num_deletes": 255, "total_data_size": 1516236, "memory_usage": 1542272, "flush_reason": "Manual Compaction"}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545255782, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1491292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41668, "largest_seqno": 42700, "table_properties": {"data_size": 1486188, "index_size": 2628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10641, "raw_average_key_size": 19, "raw_value_size": 1476043, "raw_average_value_size": 2669, "num_data_blocks": 118, "num_entries": 553, "num_filter_entries": 553, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123445, "oldest_key_time": 1764123445, "file_creation_time": 1764123545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 13949 microseconds, and 8217 cpu microseconds.
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.255977) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1491292 bytes OK
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.256001) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.259437) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.259460) EVENT_LOG_v1 {"time_micros": 1764123545259453, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.259482) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1511366, prev total WAL file size 1511366, number of live WAL files 2.
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.261056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353035' seq:72057594037927935, type:22 .. '6C6F676D0031373536' seq:0, type:0; will stop at (end)
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1456KB)], [98(7511KB)]
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545261160, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9183119, "oldest_snapshot_seqno": -1}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5788 keys, 9077628 bytes, temperature: kUnknown
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545324049, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9077628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9038979, "index_size": 23051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 150679, "raw_average_key_size": 26, "raw_value_size": 8934384, "raw_average_value_size": 1543, "num_data_blocks": 921, "num_entries": 5788, "num_filter_entries": 5788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.324381) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9077628 bytes
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.327199) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.7 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(12.2) write-amplify(6.1) OK, records in: 6310, records dropped: 522 output_compression: NoCompression
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.327234) EVENT_LOG_v1 {"time_micros": 1764123545327218, "job": 58, "event": "compaction_finished", "compaction_time_micros": 63017, "compaction_time_cpu_micros": 43144, "output_level": 6, "num_output_files": 1, "total_output_size": 9077628, "num_input_records": 6310, "num_output_records": 5788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545327955, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123545330622, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.260886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.331025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.331032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.331035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.331038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:19:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:19:05.331041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
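The compaction summary above is fully determined by the byte counts in the surrounding events: job 58 read the just-flushed 1,491,292-byte L0 file (table #100) plus the existing L6 file, 9,183,119 input bytes in total, and wrote one 9,077,628-byte L6 file (table #101) in 63,017 microseconds. Write amplification is output over L0 input; read-write amplification adds the reads:

    l0_input = 1_491_292      # table #100: the just-flushed L0 file
    total_input = 9_183_119   # compaction_started: input_data_size (L0 + L6)
    output = 9_077_628        # table #101 written back to L6

    write_amp = output / l0_input               # log: write-amplify(6.1)
    rw_amp = (total_input + output) / l0_input  # log: read-write-amplify(12.2)
    read_mb_s = total_input / 63_017            # bytes/us == MB/s; log: 145.7 rd

    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}, "
          f"{read_mb_s:.1f} MB/s read")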
Nov 26 02:19:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:19:06 compute-0 podman[459314]: 2025-11-26 02:19:06.591417046 +0000 UTC m=+0.137225566 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 02:19:06 compute-0 podman[459316]: 2025-11-26 02:19:06.591151058 +0000 UTC m=+0.122482103 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:19:06 compute-0 podman[459315]: 2025-11-26 02:19:06.598324939 +0000 UTC m=+0.137543475 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:19:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:19:08 compute-0 nova_compute[350387]: 2025-11-26 02:19:08.721 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:09 compute-0 nova_compute[350387]: 2025-11-26 02:19:09.377 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:19:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:13 compute-0 nova_compute[350387]: 2025-11-26 02:19:13.724 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:14 compute-0 nova_compute[350387]: 2025-11-26 02:19:14.381 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:14 compute-0 podman[459368]: 2025-11-26 02:19:14.623769639 +0000 UTC m=+0.169650495 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 02:19:14 compute-0 podman[459369]: 2025-11-26 02:19:14.652940706 +0000 UTC m=+0.191407984 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 02:19:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:18 compute-0 podman[459412]: 2025-11-26 02:19:18.572631379 +0000 UTC m=+0.110591550 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 26 02:19:18 compute-0 podman[459411]: 2025-11-26 02:19:18.580909551 +0000 UTC m=+0.127395000 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 02:19:18 compute-0 nova_compute[350387]: 2025-11-26 02:19:18.726 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:19 compute-0 nova_compute[350387]: 2025-11-26 02:19:19.385 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:23 compute-0 nova_compute[350387]: 2025-11-26 02:19:23.730 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:24 compute-0 nova_compute[350387]: 2025-11-26 02:19:24.390 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:24 compute-0 podman[459450]: 2025-11-26 02:19:24.563093561 +0000 UTC m=+0.108276495 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:19:24 compute-0 podman[459449]: 2025-11-26 02:19:24.584633835 +0000 UTC m=+0.129216162 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 02:19:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:19:25.004 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:19:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:19:25.006 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:19:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:19:25.007 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:19:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:19:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349656403' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:19:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:19:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2349656403' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:19:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:28 compute-0 nova_compute[350387]: 2025-11-26 02:19:28.735 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:29 compute-0 nova_compute[350387]: 2025-11-26 02:19:29.394 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:29 compute-0 podman[158021]: time="2025-11-26T02:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:19:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:19:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
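[editor's note] The two GET lines above are the prometheus-podman-exporter polling the libpod REST API; the socket it uses appears later in this log as CONTAINER_HOST=unix:///run/podman/podman.sock. A sketch of issuing the same containers/json query from Python over that unix socket (the /v4.9.3 path prefix is assumed to match the server version seen in these requests):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that speaks HTTP over an AF_UNIX socket."""
        def __init__(self, path: str):
            super().__init__("localhost")  # host is unused for unix sockets
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])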
Nov 26 02:19:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:31 compute-0 openstack_network_exporter[367323]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:19:31 compute-0 openstack_network_exporter[367323]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:19:31 compute-0 openstack_network_exporter[367323]: ERROR   02:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:19:31 compute-0 openstack_network_exporter[367323]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:19:31 compute-0 openstack_network_exporter[367323]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
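[editor's note] The ERROR lines above are expected noise on a compute node: openstack_network_exporter probes ovn-northd and ovsdb-server through their appctl control sockets, but neither daemon runs here (ovn-northd lives on the controllers), so no *.ctl files exist and the datapath queries have nothing to talk to. A minimal check in the same spirit, assuming the conventional /var/run/ovn and /var/run/openvswitch runtime directories:

    import glob

    # Control sockets are created as <rundir>/<daemon>.<pid>.ctl while the
    # daemon is up; an empty glob is exactly why the exporter logs
    # "no control socket files found".
    for daemon, rundir in [("ovn-northd", "/var/run/ovn"),
                           ("ovsdb-server", "/var/run/openvswitch")]:
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "reachable" if hits else "no control socket")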
Nov 26 02:19:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:33 compute-0 nova_compute[350387]: 2025-11-26 02:19:33.737 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:34 compute-0 nova_compute[350387]: 2025-11-26 02:19:34.398 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:37 compute-0 podman[459494]: 2025-11-26 02:19:37.547663026 +0000 UTC m=+0.098315146 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:19:37 compute-0 podman[459495]: 2025-11-26 02:19:37.606482984 +0000 UTC m=+0.140666702 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:19:37 compute-0 podman[459493]: 2025-11-26 02:19:37.607509293 +0000 UTC m=+0.148833271 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 02:19:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.345 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.345 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.347 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.741 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:19:38 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186204792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.857 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
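[editor's note] Nova's resource audit shells out to `ceph df --format=json` exactly as logged above (a 0.51 s round trip), and the mon lines at 02:19:38 show the matching df dispatch from client.openstack. A sketch of running the same command and pulling the cluster totals; the stats layout shown is what Reef emits, so treat the key names as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    df = json.loads(out)

    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"avail: {avail / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
    # With the pgmap above this prints roughly: avail: 60.0 GiB of 60.0 GiB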
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.961 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.962 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.971 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:19:38 compute-0 nova_compute[350387]: 2025-11-26 02:19:38.973 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.401 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.557 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.559 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.897186279296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.560 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.561 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.664 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.666 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.666 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.667 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:19:39 compute-0 nova_compute[350387]: 2025-11-26 02:19:39.742 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:19:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:19:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229171884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:19:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:40 compute-0 nova_compute[350387]: 2025-11-26 02:19:40.253 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:19:40 compute-0 nova_compute[350387]: 2025-11-26 02:19:40.263 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:19:40 compute-0 nova_compute[350387]: 2025-11-26 02:19:40.289 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:19:40 compute-0 nova_compute[350387]: 2025-11-26 02:19:40.292 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:19:40 compute-0 nova_compute[350387]: 2025-11-26 02:19:40.292 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
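[editor's note] The inventory dict logged at 02:19:40 is what the earlier "Final resource view" is derived from: placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked directly from the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 -- which is why the 2 vcpus
    # allocated out of 8 physical leave ample headroom in this audit.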
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:19:41
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', 'volumes']
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:19:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:19:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:19:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:19:42 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:43 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:43 compute-0 nova_compute[350387]: 2025-11-26 02:19:43.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:43 compute-0 nova_compute[350387]: 2025-11-26 02:19:43.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:43 compute-0 nova_compute[350387]: 2025-11-26 02:19:43.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 1be299b3-2c58-4c02-94df-f32339eafd85 does not exist
Nov 26 02:19:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e755f48f-b271-406c-be54-cc70de07f06d does not exist
Nov 26 02:19:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a1327fe1-2ba4-4436-a241-1cf871f54374 does not exist
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:19:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:19:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:19:43 compute-0 nova_compute[350387]: 2025-11-26 02:19:43.743 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:19:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:19:44 compute-0 nova_compute[350387]: 2025-11-26 02:19:44.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:44 compute-0 podman[459983]: 2025-11-26 02:19:44.624470486 +0000 UTC m=+0.056229636 container create 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:19:44 compute-0 podman[459983]: 2025-11-26 02:19:44.60639601 +0000 UTC m=+0.038155150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:44 compute-0 systemd[1]: Started libpod-conmon-15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf.scope.
Nov 26 02:19:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:44 compute-0 podman[459983]: 2025-11-26 02:19:44.791083504 +0000 UTC m=+0.222842654 container init 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:19:44 compute-0 podman[459983]: 2025-11-26 02:19:44.801863536 +0000 UTC m=+0.233622676 container start 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:19:44 compute-0 podman[459983]: 2025-11-26 02:19:44.805946281 +0000 UTC m=+0.237705421 container attach 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:19:44 compute-0 lucid_yonath[460013]: 167 167
Nov 26 02:19:44 compute-0 systemd[1]: libpod-15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf.scope: Deactivated successfully.
Nov 26 02:19:44 compute-0 podman[459995]: 2025-11-26 02:19:44.817139304 +0000 UTC m=+0.123659915 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 02:19:44 compute-0 podman[459996]: 2025-11-26 02:19:44.844389478 +0000 UTC m=+0.148484271 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 02:19:44 compute-0 podman[460043]: 2025-11-26 02:19:44.87086448 +0000 UTC m=+0.038262823 container died 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5441c304fca59f6745fccd48aa6c5cbc76adcb708be9b22bf3d1237cefd8144c-merged.mount: Deactivated successfully.
Nov 26 02:19:44 compute-0 podman[460043]: 2025-11-26 02:19:44.928967528 +0000 UTC m=+0.096365871 container remove 15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:19:44 compute-0 systemd[1]: libpod-conmon-15e211799401374c826daf39ef462fb74c2d470d4e0a314064a8eeabc9c140cf.scope: Deactivated successfully.
Nov 26 02:19:45 compute-0 podman[460067]: 2025-11-26 02:19:45.215441304 +0000 UTC m=+0.089183660 container create 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:19:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:45 compute-0 podman[460067]: 2025-11-26 02:19:45.192014498 +0000 UTC m=+0.065756854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:45 compute-0 systemd[1]: Started libpod-conmon-8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246.scope.
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.305 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.305 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.306 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:19:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
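These kernel lines mean the XFS filesystem backing the overlay mounts was formatted without the bigtime feature, so inode timestamps are stored as signed 32-bit seconds and cap at 0x7fffffff; `xfs_info <mountpoint>` reports `bigtime=1` on filesystems with the extended range. Decoding the limit the kernel prints:

    # Decode the 0x7fffffff timestamp limit from the kernel warnings above:
    # the classic y2038 cutoff for signed 32-bit epoch seconds.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00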
Nov 26 02:19:45 compute-0 podman[460067]: 2025-11-26 02:19:45.378923795 +0000 UTC m=+0.252666171 container init 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:19:45 compute-0 podman[460067]: 2025-11-26 02:19:45.405379606 +0000 UTC m=+0.279121962 container start 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:19:45 compute-0 podman[460067]: 2025-11-26 02:19:45.409893852 +0000 UTC m=+0.283636208 container attach 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.739 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.741 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.742 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:19:45 compute-0 nova_compute[350387]: 2025-11-26 02:19:45.743 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 02:19:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:46 compute-0 epic_gould[460083]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:19:46 compute-0 epic_gould[460083]: --> relative data size: 1.0
Nov 26 02:19:46 compute-0 epic_gould[460083]: --> All data devices are unavailable
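The epic_gould output is a ceph-volume batch-style report: the three LVM data devices were passed in, and "All data devices are unavailable" means they already carry OSDs, so the run is a no-op rather than an error. A hedged sketch of reproducing that report inside the same image (`ceph-volume lvm batch --report` is a real subcommand; the exact flags cephadm passes are an assumption):

    # Hedged sketch: dry-run batch report in the ceph container image.
    # The flag set is illustrative; cephadm's real invocation differs.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]
    subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "--entrypoint", "/usr/sbin/ceph-volume", image,
         "lvm", "batch", "--no-auto", "--report", *lvs],
        check=True,
    )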
Nov 26 02:19:46 compute-0 systemd[1]: libpod-8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246.scope: Deactivated successfully.
Nov 26 02:19:46 compute-0 podman[460067]: 2025-11-26 02:19:46.671581923 +0000 UTC m=+1.545324369 container died 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:19:46 compute-0 systemd[1]: libpod-8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246.scope: Consumed 1.187s CPU time.
Nov 26 02:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5ade40aadb5b248ae612465ea173c545780ad0e9ebf17b117f7987b8cba7ce-merged.mount: Deactivated successfully.
Nov 26 02:19:46 compute-0 podman[460067]: 2025-11-26 02:19:46.781318838 +0000 UTC m=+1.655061214 container remove 8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:19:46 compute-0 systemd[1]: libpod-conmon-8402ded99501a81899087387a3c1b6b9bea939847128a5721d984356b7a40246.scope: Deactivated successfully.
Nov 26 02:19:47 compute-0 nova_compute[350387]: 2025-11-26 02:19:47.287 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:19:47 compute-0 nova_compute[350387]: 2025-11-26 02:19:47.306 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:19:47 compute-0 nova_compute[350387]: 2025-11-26 02:19:47.307 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 02:19:47 compute-0 nova_compute[350387]: 2025-11-26 02:19:47.308 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
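The network_info blob logged above is Nova's refreshed cache for the instance's single OVN-backed port: fixed IP 10.100.2.57 on 10.100.0.0/16 behind gateway 10.100.0.1, MTU 1442, tap device tap0659d4f2-a7, bound to br-int. Extracting the useful fields from such a blob is plain JSON handling (a convenience snippet, not Nova code; the input file is an assumed capture of that log payload):

    # Summarize a Nova network_info list like the one logged above.
    # "network_info.json" is an assumed dump of that JSON payload.
    import json

    with open("network_info.json") as f:
        network_info = json.load(f)
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips)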
Nov 26 02:19:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:47 compute-0 podman[460261]: 2025-11-26 02:19:47.982133112 +0000 UTC m=+0.082121822 container create 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:47.950395083 +0000 UTC m=+0.050383883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:48 compute-0 systemd[1]: Started libpod-conmon-7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2.scope.
Nov 26 02:19:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:48.101754633 +0000 UTC m=+0.201743363 container init 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:48.115493828 +0000 UTC m=+0.215482558 container start 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:19:48 compute-0 nice_poitras[460277]: 167 167
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:48.122436153 +0000 UTC m=+0.222424853 container attach 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:19:48 compute-0 systemd[1]: libpod-7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2.scope: Deactivated successfully.
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:48.124330146 +0000 UTC m=+0.224318846 container died 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 02:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d776a343d50f7049faad8bcdd9c8311ac520acba9c4db85039e05950e52a577b-merged.mount: Deactivated successfully.
Nov 26 02:19:48 compute-0 podman[460261]: 2025-11-26 02:19:48.177778324 +0000 UTC m=+0.277767024 container remove 7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:19:48 compute-0 systemd[1]: libpod-conmon-7a4657efdcba0763ba6e2dbdc833d02f98970e7e344041ced85473e60dffecb2.scope: Deactivated successfully.
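nice_poitras printing "167 167" is a uid/gid probe: 167 is the ceph user and group id baked into these CentOS-based ceph images, and cephadm learns it by running a throwaway container before chowning daemon directories on the host. A sketch of such a probe (stat'ing /var/lib/ceph inside the image is an assumed detail of the internals):

    # Hedged sketch of the uid/gid probe that prints "167 167" above;
    # the stat target path is an assumption.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    uid, gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(uid, gid)  # expected: 167 167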
Nov 26 02:19:48 compute-0 podman[460301]: 2025-11-26 02:19:48.484193869 +0000 UTC m=+0.090689642 container create 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:19:48 compute-0 podman[460301]: 2025-11-26 02:19:48.456012339 +0000 UTC m=+0.062508142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:48 compute-0 systemd[1]: Started libpod-conmon-8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041.scope.
Nov 26 02:19:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc86e49a79e7c2a5e089801fadaee15fe16ec6455440caea9ea5efb8b983706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc86e49a79e7c2a5e089801fadaee15fe16ec6455440caea9ea5efb8b983706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc86e49a79e7c2a5e089801fadaee15fe16ec6455440caea9ea5efb8b983706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc86e49a79e7c2a5e089801fadaee15fe16ec6455440caea9ea5efb8b983706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:48 compute-0 podman[460301]: 2025-11-26 02:19:48.62700791 +0000 UTC m=+0.233503753 container init 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:19:48 compute-0 podman[460301]: 2025-11-26 02:19:48.660579131 +0000 UTC m=+0.267074904 container start 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:19:48 compute-0 podman[460301]: 2025-11-26 02:19:48.68910551 +0000 UTC m=+0.295601313 container attach 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:19:48 compute-0 nova_compute[350387]: 2025-11-26 02:19:48.746 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:19:49 compute-0 nova_compute[350387]: 2025-11-26 02:19:49.410 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]: {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    "0": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "devices": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "/dev/loop3"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            ],
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_name": "ceph_lv0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_size": "21470642176",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "name": "ceph_lv0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "tags": {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_name": "ceph",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.crush_device_class": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.encrypted": "0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_id": "0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.vdo": "0"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            },
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "vg_name": "ceph_vg0"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        }
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    ],
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    "1": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "devices": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "/dev/loop4"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            ],
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_name": "ceph_lv1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_size": "21470642176",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "name": "ceph_lv1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "tags": {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_name": "ceph",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.crush_device_class": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.encrypted": "0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_id": "1",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.vdo": "0"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            },
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "vg_name": "ceph_vg1"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        }
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    ],
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    "2": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "devices": [
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "/dev/loop5"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            ],
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_name": "ceph_lv2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_size": "21470642176",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "name": "ceph_lv2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "tags": {
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.cluster_name": "ceph",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.crush_device_class": "",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.encrypted": "0",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osd_id": "2",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:                "ceph.vdo": "0"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            },
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "type": "block",
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:            "vg_name": "ceph_vg2"
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:        }
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]:    ]
Nov 26 02:19:49 compute-0 beautiful_lederberg[460315]: }
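The beautiful_lederberg payload is `ceph-volume lvm list --format json`: OSD ids mapped to their logical volumes, with the ceph.* LV tags persisting everything needed to re-activate each OSD (cluster fsid, osd fsid, device class, encryption flag). Reducing it to one line per OSD is straightforward (`ceph-volume lvm list --format json` is a real command; in this deployment it would run inside the ceph container as above):

    # Summarize `ceph-volume lvm list --format json` output, as logged above,
    # into one line per OSD: id, LV path, osd_fsid, type.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])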
Nov 26 02:19:49 compute-0 podman[460301]: 2025-11-26 02:19:49.521222855 +0000 UTC m=+1.127718628 container died 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:19:49 compute-0 systemd[1]: libpod-8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041.scope: Deactivated successfully.
Nov 26 02:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc86e49a79e7c2a5e089801fadaee15fe16ec6455440caea9ea5efb8b983706-merged.mount: Deactivated successfully.
Nov 26 02:19:49 compute-0 podman[460301]: 2025-11-26 02:19:49.611208916 +0000 UTC m=+1.217704689 container remove 8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:19:49 compute-0 podman[460326]: 2025-11-26 02:19:49.611882605 +0000 UTC m=+0.154479409 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 02:19:49 compute-0 podman[460325]: 2025-11-26 02:19:49.616783792 +0000 UTC m=+0.155107356 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm)
Nov 26 02:19:49 compute-0 systemd[1]: libpod-conmon-8396dd5db2e9ca2185f9f4bb8ac4ae0eb5194eebde855fafa333c085b2394041.scope: Deactivated successfully.
Nov 26 02:19:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.766270839 +0000 UTC m=+0.097343988 container create fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.731565417 +0000 UTC m=+0.062638586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:50 compute-0 systemd[1]: Started libpod-conmon-fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235.scope.
Nov 26 02:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.917443855 +0000 UTC m=+0.248517014 container init fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.93866186 +0000 UTC m=+0.269734999 container start fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:19:50 compute-0 pedantic_wescoff[460525]: 167 167
Nov 26 02:19:50 compute-0 systemd[1]: libpod-fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235.scope: Deactivated successfully.
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.949903655 +0000 UTC m=+0.280976784 container attach fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:19:50 compute-0 podman[460511]: 2025-11-26 02:19:50.951406517 +0000 UTC m=+0.282479646 container died fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-41cefc5a4e7104fc256e7c32d70261f6dd0dacedfdacb48bec251aeceabc8ed9-merged.mount: Deactivated successfully.
Nov 26 02:19:51 compute-0 podman[460511]: 2025-11-26 02:19:51.033698842 +0000 UTC m=+0.364772001 container remove fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wescoff, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:19:51 compute-0 systemd[1]: libpod-conmon-fe7696168d5982e4e2cea537d746a4c5bebccbb9833278d708a193802bb77235.scope: Deactivated successfully.
Nov 26 02:19:51 compute-0 podman[460549]: 2025-11-26 02:19:51.267344309 +0000 UTC m=+0.076798183 container create ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:19:51 compute-0 podman[460549]: 2025-11-26 02:19:51.247351359 +0000 UTC m=+0.056805253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:19:51 compute-0 systemd[1]: Started libpod-conmon-ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7.scope.
Nov 26 02:19:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533400b44f56453d3cc2454d2e2f127f5434de17dc99f0866e17f284a1eb781a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533400b44f56453d3cc2454d2e2f127f5434de17dc99f0866e17f284a1eb781a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533400b44f56453d3cc2454d2e2f127f5434de17dc99f0866e17f284a1eb781a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533400b44f56453d3cc2454d2e2f127f5434de17dc99f0866e17f284a1eb781a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:19:51 compute-0 podman[460549]: 2025-11-26 02:19:51.416411944 +0000 UTC m=+0.225865898 container init ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:19:51 compute-0 podman[460549]: 2025-11-26 02:19:51.433330249 +0000 UTC m=+0.242784123 container start ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:19:51 compute-0 podman[460549]: 2025-11-26 02:19:51.438098212 +0000 UTC m=+0.247552176 container attach ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
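Every pg_autoscaler line above follows the same arithmetic: the raw pg target equals capacity ratio x bias x 300, where 300 is evidently this cluster's PG budget (consistent with the default 100 target PGs per OSD across the 3 OSDs; that derivation is inferred from the numbers, not stated in the log). The raw value is then quantized to a power of two and clamped by per-pool minimums, and pg_num only actually changes when it is off from the target by more than the autoscaler threshold (3x by default), which is why 'vms' stays at 32 despite a raw target of 0.46. A worked check against the logged values:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    # PG_BUDGET = 300 is inferred from the logged ratio/target pairs.
    PG_BUDGET = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0015185461027544442, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: raw pg target = {ratio * bias * PG_BUDGET}")
    # -> 0.0021557..., 0.4555638..., 0.0006104...  (matching the log)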
Nov 26 02:19:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]: {
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_id": 0,
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "type": "bluestore"
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    },
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_id": 2,
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "type": "bluestore"
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    },
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_id": 1,
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:        "type": "bluestore"
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]:    }
Nov 26 02:19:52 compute-0 heuristic_jennings[460565]: }
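heuristic_jennings is the companion `ceph-volume raw list` probe: the same three bluestore OSDs, this time keyed by osd_uuid and reported through their device-mapper paths. Sorting the map into osd_id order makes the inventory easy to eyeball (`ceph-volume raw list` is a real subcommand; the input file is an assumed capture of the JSON above):

    # Tabulate `ceph-volume raw list` output (logged above) by osd_id.
    # "raw_list.json" is an assumed capture of that JSON payload.
    import json

    with open("raw_list.json") as f:
        raw = json.load(f)
    for osd_uuid, info in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(info["osd_id"], info["device"], info["type"], osd_uuid)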
Nov 26 02:19:52 compute-0 systemd[1]: libpod-ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7.scope: Deactivated successfully.
Nov 26 02:19:52 compute-0 systemd[1]: libpod-ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7.scope: Consumed 1.189s CPU time.
Nov 26 02:19:52 compute-0 podman[460549]: 2025-11-26 02:19:52.626798978 +0000 UTC m=+1.436252842 container died ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-533400b44f56453d3cc2454d2e2f127f5434de17dc99f0866e17f284a1eb781a-merged.mount: Deactivated successfully.
Nov 26 02:19:52 compute-0 podman[460549]: 2025-11-26 02:19:52.731182122 +0000 UTC m=+1.540635996 container remove ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:19:52 compute-0 systemd[1]: libpod-conmon-ba16d5fe5a7a433b4a9581c7dad0b14cc48c71489b6df43237db28048671c4f7.scope: Deactivated successfully.
Nov 26 02:19:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:19:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:19:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9d926c35-fb25-4a78-8391-27fd2319261a does not exist
Nov 26 02:19:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b2660cd7-2121-4f9b-abed-48945767ffad does not exist
Nov 26 02:19:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:53 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:19:53 compute-0 nova_compute[350387]: 2025-11-26 02:19:53.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:53 compute-0 nova_compute[350387]: 2025-11-26 02:19:53.306 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:53 compute-0 nova_compute[350387]: 2025-11-26 02:19:53.750 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:54 compute-0 nova_compute[350387]: 2025-11-26 02:19:54.419 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:19:55 compute-0 podman[460661]: 2025-11-26 02:19:55.59746002 +0000 UTC m=+0.132752630 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:19:55 compute-0 podman[460660]: 2025-11-26 02:19:55.601408171 +0000 UTC m=+0.151343051 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 02:19:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:57 compute-0 nova_compute[350387]: 2025-11-26 02:19:57.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:57 compute-0 nova_compute[350387]: 2025-11-26 02:19:57.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:19:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:58 compute-0 nova_compute[350387]: 2025-11-26 02:19:58.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:19:58 compute-0 nova_compute[350387]: 2025-11-26 02:19:58.754 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:59 compute-0 nova_compute[350387]: 2025-11-26 02:19:59.424 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:19:59 compute-0 podman[158021]: time="2025-11-26T02:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:19:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:19:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:19:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
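[Annotation] The two GETs above are the libpod REST API being polled over the podman socket: containers/json?all=true for the inventory and containers/stats for per-container usage. The Go-http-client user agent and the CONTAINER_HOST=unix:///run/podman/podman.sock setting in the podman_exporter config further down point at the prometheus-podman-exporter as the caller. A sketch of the same inventory query from Python, assuming the podman-py client library and that socket path (hypothetical client code, not the exporter's own implementation, which is written in Go):

    from podman import PodmanClient

    # Socket path taken from the podman_exporter config above.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Equivalent of GET /libpod/containers/json?all=true
        for c in client.containers.list(all=True):
            print(c.id[:12], c.name, c.status)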
Nov 26 02:20:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:01 compute-0 openstack_network_exporter[367323]: ERROR   02:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:20:01 compute-0 openstack_network_exporter[367323]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:20:01 compute-0 openstack_network_exporter[367323]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:20:01 compute-0 openstack_network_exporter[367323]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:20:01 compute-0 openstack_network_exporter[367323]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:20:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:03 compute-0 nova_compute[350387]: 2025-11-26 02:20:03.756 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:04 compute-0 nova_compute[350387]: 2025-11-26 02:20:04.429 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:08 compute-0 podman[460703]: 2025-11-26 02:20:08.596715668 +0000 UTC m=+0.119543621 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:20:08 compute-0 podman[460702]: 2025-11-26 02:20:08.596942214 +0000 UTC m=+0.127537544 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:20:08 compute-0 podman[460701]: 2025-11-26 02:20:08.612417398 +0000 UTC m=+0.151109575 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 26 02:20:08 compute-0 nova_compute[350387]: 2025-11-26 02:20:08.761 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:09 compute-0 nova_compute[350387]: 2025-11-26 02:20:09.433 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:13 compute-0 nova_compute[350387]: 2025-11-26 02:20:13.769 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:14 compute-0 nova_compute[350387]: 2025-11-26 02:20:14.437 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:15 compute-0 podman[460759]: 2025-11-26 02:20:15.573017742 +0000 UTC m=+0.116771413 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 02:20:15 compute-0 podman[460760]: 2025-11-26 02:20:15.634583187 +0000 UTC m=+0.165288112 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:20:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:18 compute-0 nova_compute[350387]: 2025-11-26 02:20:18.777 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:19 compute-0 nova_compute[350387]: 2025-11-26 02:20:19.440 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:20 compute-0 podman[460805]: 2025-11-26 02:20:20.593508587 +0000 UTC m=+0.131042973 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Nov 26 02:20:20 compute-0 podman[460806]: 2025-11-26 02:20:20.598533728 +0000 UTC m=+0.139718286 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:20:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:23 compute-0 nova_compute[350387]: 2025-11-26 02:20:23.780 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:24 compute-0 nova_compute[350387]: 2025-11-26 02:20:24.444 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:20:25.006 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:20:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:20:25.007 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:20:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:20:25.008 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:20:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:26 compute-0 podman[460845]: 2025-11-26 02:20:26.599228317 +0000 UTC m=+0.143769309 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:20:26 compute-0 podman[460844]: 2025-11-26 02:20:26.634244419 +0000 UTC m=+0.180560240 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350)
Nov 26 02:20:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:20:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753603347' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:20:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:20:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753603347' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
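[Annotation] The audited pair above, df plus osd pool get-quota on "volumes", is a capacity poll from a client authenticating as client.openstack (192.168.122.10). The same commands can be issued through the librados Python binding's mon_command(), which returns the JSON such services consume; a sketch using the client name and conf path from the log:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
        ):
            # mon_command takes the JSON command string and an input buffer,
            # and returns (retcode, output bytes, error string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], ret, json.loads(out or b"{}"))
    finally:
        cluster.shutdown()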
Nov 26 02:20:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:28 compute-0 nova_compute[350387]: 2025-11-26 02:20:28.786 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:29 compute-0 nova_compute[350387]: 2025-11-26 02:20:29.449 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:29 compute-0 podman[158021]: time="2025-11-26T02:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:20:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:20:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Nov 26 02:20:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:31 compute-0 openstack_network_exporter[367323]: ERROR   02:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:20:31 compute-0 openstack_network_exporter[367323]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:20:31 compute-0 openstack_network_exporter[367323]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:20:31 compute-0 openstack_network_exporter[367323]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:20:31 compute-0 openstack_network_exporter[367323]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:20:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:33 compute-0 nova_compute[350387]: 2025-11-26 02:20:33.792 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:34 compute-0 nova_compute[350387]: 2025-11-26 02:20:34.452 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:38 compute-0 nova_compute[350387]: 2025-11-26 02:20:38.795 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:39 compute-0 nova_compute[350387]: 2025-11-26 02:20:39.456 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:20:39 compute-0 podman[460886]: 2025-11-26 02:20:39.572868126 +0000 UTC m=+0.113514081 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 02:20:39 compute-0 podman[460887]: 2025-11-26 02:20:39.573474253 +0000 UTC m=+0.111443223 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 02:20:39 compute-0 podman[460888]: 2025-11-26 02:20:39.591671453 +0000 UTC m=+0.119189820 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:20:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.341 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.344 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.345 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:20:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:20:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200076725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.854 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
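[Annotation] Worth noting in the exchange above: nova's resource audit shells out to the ceph CLI through oslo.concurrency's processutils rather than binding librados, and the monitor's audit channel shows the same df command arriving from client.openstack. A minimal reproduction of that call path, using the exact command from the log:

    import json
    from oslo_concurrency import processutils

    # Same invocation nova_compute logged above; execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    print(json.loads(out)["stats"]["total_avail_bytes"])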
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.966 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.967 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.974 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:20:40 compute-0 nova_compute[350387]: 2025-11-26 02:20:40.974 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:20:41
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.log', 'backups', '.rgw.root']
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.605 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.607 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.897186279296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.607 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.608 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
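The Acquiring/acquired pair above (and the matching released line at 02:20:42.362) is oslo.concurrency's standard lock tracing. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is illustrative, not nova's actual code:

    from oslo_concurrency import lockutils

    # synchronized() serializes callers on the named lock and emits the same
    # "Acquiring" / "acquired" / "released" DEBUG lines seen in this excerpt.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # recompute the host resource view while holding the lock

    update_available_resource()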
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.698 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.699 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.699 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.700 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
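The resource view at 02:20:41.607 embeds the host's PCI inventory as JSON. A small standard-library sketch of slicing that list by vendor; the pci_devices literal below is abridged to two entries from the log:

    import json
    from collections import Counter

    # Two entries copied from the pci_devices list above; 1af4 is the
    # virtio vendor ID and 8086 is Intel, as expected for a KVM guest.
    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0",
       "product_id": "1002", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1002", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
       "product_id": "1237", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_1237", "dev_type": "type-PCI"}
    ]''')
    print(Counter(d["vendor_id"] for d in pci_devices))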
Nov 26 02:20:41 compute-0 nova_compute[350387]: 2025-11-26 02:20:41.756 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:20:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:20:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:20:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459974253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:20:42 compute-0 nova_compute[350387]: 2025-11-26 02:20:42.318 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
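nova_compute shells out to ceph df (a 0.562s round trip above) to size the RBD-backed disk pool. A minimal reproduction with the standard library, assuming the same client.openstack keyring is available; the stats field names follow ceph df's JSON output:

    import json
    import subprocess

    # The exact command from the log; --id/--conf select client.openstack,
    # the entity the ceph-mon audit line shows dispatching "df".
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd, text=True))
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")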
Nov 26 02:20:42 compute-0 nova_compute[350387]: 2025-11-26 02:20:42.332 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:20:42 compute-0 nova_compute[350387]: 2025-11-26 02:20:42.358 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
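The inventory above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Checking the reported figures:

    # Figures copied from the inventory report above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2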
Nov 26 02:20:42 compute-0 nova_compute[350387]: 2025-11-26 02:20:42.361 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:20:42 compute-0 nova_compute[350387]: 2025-11-26 02:20:42.362 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.877 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.878 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
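The two lines above explain the registration storm that follows: the [pollsters] source has more pollsters than worker threads, so everything funnels through a single-thread executor and runs serially. A minimal sketch of that dispatch shape; the pollster names come from this excerpt, and the poll function is a stand-in:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for one pollster's sample-collection cycle.
        return f"{name}: polled"

    pollsters = ["disk.ephemeral.size", "network.incoming.packets",
                 "disk.root.size", "cpu", "memory.usage"]

    # max_workers=1 mirrors the "[1] threads" line: tasks queue up and run
    # one after another, stretching the overall polling cycle.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)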
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.878 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.879 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.892 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.893 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.893 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.894 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50a909dc70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.888 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.904 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'name': 'te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
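Each discovery record above carries the flavor and metering metadata that the pollsters attach to their samples. A small sketch pulling the fields this cycle actually uses; the dict is abridged from the first record:

    # Abridged copy of one discovery record from above.
    instance = {
        "id": "74d081af-66cd-4e37-99e4-31f777885766",
        "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128,
                   "disk": 1, "ephemeral": 0},
        "metadata": {"metering.server_group":
                     "bd820598-acdd-4f42-8252-1f5951161b01"},
    }
    flavor = instance["flavor"]
    print(flavor["name"], flavor["ram"],
          instance["metadata"]["metering.server_group"])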
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.905 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.905 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.905 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.905 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:20:42.905774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.907 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.907 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.908 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.908 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.908 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.908 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:20:42.908590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.915 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.922 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.922 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.923 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.923 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.923 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.923 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.923 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:20:42.923813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.925 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.925 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.925 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.925 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.925 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.926 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:20:42.926098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.926 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.927 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.927 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.928 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.928 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.928 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.928 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.928 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:20:42.928733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.929 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.929 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.930 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.930 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.930 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.931 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.931 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.931 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.931 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.932 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.933 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.933 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.933 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:20:42.931361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.934 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.934 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.934 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:20:42.934311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:42.966 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 334430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.001 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/cpu volume: 322450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.003 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
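The cpu meter is cumulative guest CPU time in nanoseconds, so the two volumes above correspond to roughly 334 s and 322 s of CPU consumed. A quick conversion:

    # cpu samples from the two instances above, in nanoseconds.
    samples = {
        "74d081af-66cd-4e37-99e4-31f777885766": 334_430_000_000,
        "add194b7-6a6c-48ef-8355-3344185eb43e": 322_450_000_000,
    }
    for uuid, ns in samples.items():
        print(uuid[:8], f"{ns / 1e9:.2f} s of CPU time")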
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.004 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.004 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.004 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.004 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.004 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.005 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:20:43.004744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.005 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.006 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.007 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.007 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.007 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.007 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:20:43.007443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.008 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.008 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/memory.usage volume: 43.23046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.009 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
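memory.usage is reported in MB, so measured against the 128 MB m1.nano flavor from the discovery records, both guests sit near one third of their allocation:

    flavor_mb = 128  # m1.nano, per the discovery records above
    usage = {"74d081af": 42.328125, "add194b7": 43.23046875}
    for uuid, used_mb in usage.items():
        print(uuid, f"{used_mb / flavor_mb:.0%} of {flavor_mb} MB")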
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.009 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.009 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.010 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.010 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.010 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.010 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:20:43.010285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.011 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.012 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.013 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:20:43.012898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.015 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.016 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.016 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.016 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.016 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.017 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.017 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:20:43.017287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.018 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.019 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.019 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.020 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.020 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:20:43.020055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.021 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
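When a polling source does set a coordination group, agents join a tooz hash ring and each polls only the resources that map to it; with group [None], as here, every resource is polled locally. A toy membership test under that partitioning idea (naive modulo placement, not tooz's actual consistent ring):

    # Toy stand-in for hash-ring partitioning; tooz's real ring differs.
    import hashlib

    def belongs_to_me(resource_id, members, me):
        ring = sorted(members)
        idx = int(hashlib.md5(resource_id.encode()).hexdigest(), 16) % len(ring)
        return ring[idx] == me

    print(belongs_to_me("74d081af-66cd-4e37-99e4-31f777885766",
                        ["compute-0", "compute-1"], "compute-0"))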
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.021 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.022 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.022 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.023 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:20:43.022567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.023 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.024 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.024 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.024 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.024 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.024 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.025 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:20:43.025169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.051 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.052 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.076 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.077 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.078 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
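Each instance contributes one disk.device.capacity sample per attached device, which is why two volumes appear per UUID. A quick unit check on the two values (the device names are not shown in these lines; the small device is plausibly a config drive, though that is an inference):

    # Byte-value sanity check on the two capacity volumes above.
    print(1073741824 / 2**30)  # 1.0   -> a 1 GiB virtual disk
    print(509952 / 1024)       # 498.0 -> a ~498 KiB second device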
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.078 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.078 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.078 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.079 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.079 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:20:43.079162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.135 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.136 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.227 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 30366720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.227 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
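disk.device.read.bytes is a cumulative counter, so a consumer turns it into throughput by differencing successive samples across the polling interval. A sketch with this cycle's first volume and a hypothetical value one interval later:

    # Deriving bytes/s from two cumulative samples; the second sample and
    # the 300 s interval are hypothetical, not from this log.
    t0, v0 = 0.0, 31070720
    t1, v1 = 300.0, 31270720
    print((v1 - v0) / (t1 - t0))  # 666.66... bytes/s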
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.229 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
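The skip suggests discovery results are resolved once per cycle and a pollster whose resource set comes back empty is simply not run; the exact semantics are internal to the manager, but a plausible reading is:

    # Hedged sketch of per-cycle discovery caching implied by the skip line.
    def poll_cycle(pollsters, discover):
        cache = {}
        for meter, source in pollsters:
            if source not in cache:
                cache[source] = discover(source)
            if not cache[source]:
                print(f"Skip pollster {meter}, no new resources found this cycle")
                continue
            print(f"Polling pollster {meter}")

    poll_cycle([("disk.device.read.bytes", "local_instances"),
                ("network.incoming.bytes.rate", "rate_resources")],
               lambda s: ["inst-1"] if s == "local_instances" else [])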
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.229 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.229 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.230 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.231 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2432488124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:20:43.229923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.231 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 867897915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.232 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 2700802924 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.232 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 184971572 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
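The latency volumes are cumulative device times from the hypervisor; at this magnitude they are most plausibly nanoseconds (libvirt block stats report total times in ns), which is an inference rather than something these lines state. Converted:

    # Reading the four latency volumes above as cumulative nanoseconds.
    for ns in (2432488124, 867897915, 2700802924, 184971572):
        print(f"{ns} ns = {ns / 1e9:.3f} s")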
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.233 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.233 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.234 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.234 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.235 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.235 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.236 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.236 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.237 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.237 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.238 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.238 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:20:43.234211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.238 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.238 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.239 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:20:43.238099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:20:43.240262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.240 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.241 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.241 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.241 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:20:43.242734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.242 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
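A power.state volume of 1 lines up with libvirt's domain state enum, in which 1 means running:

    # Subset decode of libvirt's virDomainState values.
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATE[1])  # both instances report "running"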
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.244 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.245 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 9013075611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.245 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.245 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 7633186066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.245 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.246 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.246 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.246 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:20:43.244896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.247 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.247 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.247 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.247 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.248 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 279 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:20:43.247141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.248 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.248 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.248 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.249 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.250 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.250 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:20:43.249323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
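Throughout the cycle, worker 15 stamps each meter's heartbeat and worker 12 logs the stored status with its timestamp; a liveness probe can compare those timestamps against the polling interval. A sketch of such a staleness check (the threshold and storage are assumptions, not ceilometer's actual healthcheck):

    # Hedged staleness check over a per-meter heartbeat timestamp.
    from datetime import datetime, timedelta, timezone

    def is_stale(last_beat, interval_s=300.0):
        return datetime.now(timezone.utc) - last_beat > timedelta(seconds=2 * interval_s)

    # Timestamp from the disk.device.allocation heartbeat above, made tz-aware.
    beat = datetime.fromisoformat("2025-11-26T02:20:43.249323+00:00")
    print(is_stale(beat))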
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.253 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.253 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.253 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.254 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.255 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:20:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:20:43.256 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
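The burst of Finished-processing lines closes the polling task: every meter in it has produced, or skipped, its samples, with the bookkeeping itself taking about 3 ms (02:20:43.253 to .256). Tallying them:

    # The 26 meters closed out by the "Finished processing pollster" lines.
    finished = [
        "disk.ephemeral.size", "network.incoming.packets", "disk.root.size",
        "network.incoming.packets.drop", "network.incoming.packets.error",
        "network.outgoing.bytes", "cpu", "network.outgoing.bytes.delta",
        "memory.usage", "network.outgoing.bytes.rate", "network.incoming.bytes",
        "network.incoming.bytes.delta", "network.outgoing.packets",
        "network.outgoing.packets.drop", "network.outgoing.packets.error",
        "disk.device.capacity", "disk.device.read.bytes",
        "network.incoming.bytes.rate", "disk.device.read.latency",
        "disk.device.read.requests", "disk.device.usage",
        "disk.device.write.bytes", "power.state", "disk.device.write.latency",
        "disk.device.write.requests", "disk.device.allocation",
    ]
    print(len(finished))  # 26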
Nov 26 02:20:43 compute-0 nova_compute[350387]: 2025-11-26 02:20:43.363 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
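nova-compute's periodic tasks (_poll_unconfirmed_resizes here, _poll_rescued_instances and others below) are manager methods registered with oslo_service's periodic-task machinery and fired by a timer loop. A self-contained imitation of that pattern, not oslo's actual internals:

    # Mimicking the periodic-task registration/dispatch pattern.
    import time

    REGISTRY = []

    def periodic_task(spacing):
        def wrap(fn):
            REGISTRY.append([fn, spacing, 0.0])  # fn, spacing, last run
            return fn
        return wrap

    @periodic_task(spacing=60)
    def _poll_unconfirmed_resizes():
        print("Running periodic task _poll_unconfirmed_resizes")

    def tick():
        now = time.monotonic()
        for entry in REGISTRY:
            fn, spacing, last = entry
            if now - last >= spacing:
                entry[2] = now
                fn()

    tick()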
Nov 26 02:20:43 compute-0 nova_compute[350387]: 2025-11-26 02:20:43.803 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
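The pgmap line is the cluster-wide placement-group summary: all 321 PGs active+clean, 236 MiB of logical data occupying 392 MiB raw out of 60 GiB. The raw-to-logical ratio hints at replication overhead on a nearly empty cluster:

    # Raw vs logical usage from the pgmap line above.
    data_mib, used_mib = 236, 392
    print(f"raw/logical = {used_mib / data_mib:.2f}")  # ~1.66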
Nov 26 02:20:44 compute-0 nova_compute[350387]: 2025-11-26 02:20:44.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:44 compute-0 nova_compute[350387]: 2025-11-26 02:20:44.460 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:45 compute-0 nova_compute[350387]: 2025-11-26 02:20:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:46 compute-0 nova_compute[350387]: 2025-11-26 02:20:46.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:46 compute-0 nova_compute[350387]: 2025-11-26 02:20:46.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:20:46 compute-0 podman[460989]: 2025-11-26 02:20:46.595780276 +0000 UTC m=+0.142979907 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:20:46 compute-0 podman[460990]: 2025-11-26 02:20:46.657173746 +0000 UTC m=+0.194485330 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
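These health_status events come from podman's periodic healthchecks, configured by the healthcheck block in each container's config_data (a script mounted at /openstack/healthcheck). One way to spot-check a container by hand is podman inspect with a Go template, sketched here via subprocess; note the field is .State.Health on recent podman and .State.Healthcheck on older releases:

    # Query one container's health the way these events report it.
    # Requires podman on the host; field name varies by podman version.
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ceilometer_agent_ipmi"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: healthy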
Nov 26 02:20:46 compute-0 nova_compute[350387]: 2025-11-26 02:20:46.786 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:20:46 compute-0 nova_compute[350387]: 2025-11-26 02:20:46.787 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:20:46 compute-0 nova_compute[350387]: 2025-11-26 02:20:46.787 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:20:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:48 compute-0 nova_compute[350387]: 2025-11-26 02:20:48.270 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:20:48 compute-0 nova_compute[350387]: 2025-11-26 02:20:48.291 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:20:48 compute-0 nova_compute[350387]: 2025-11-26 02:20:48.292 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
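The heal task serializes on a per-instance refresh_cache-<uuid> lock, forcibly refetches the Neutron network info (the instance_info_cache payload above), stores it, and releases the lock. The same acquire/refresh/release shape, with a plain threading.Lock standing in for oslo_concurrency.lockutils:

    # Sketch of the lock-guarded cache refresh traced above.
    import threading

    _locks = {}

    def refresh_instance_cache(uuid, fetch_nw_info, cache):
        lock = _locks.setdefault(f"refresh_cache-{uuid}", threading.Lock())
        with lock:  # "Acquiring"/"Acquired lock"; released on exit
            cache[uuid] = fetch_nw_info(uuid)  # "Forcefully refreshing ..."

    cache = {}
    refresh_instance_cache("add194b7-6a6c-48ef-8355-3344185eb43e",
                           lambda u: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60"}],
                           cache)
    print(cache)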
Nov 26 02:20:48 compute-0 nova_compute[350387]: 2025-11-26 02:20:48.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:48 compute-0 nova_compute[350387]: 2025-11-26 02:20:48.808 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:49 compute-0 nova_compute[350387]: 2025-11-26 02:20:49.464 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:51 compute-0 podman[461035]: 2025-11-26 02:20:51.588134055 +0000 UTC m=+0.131571328 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:20:51 compute-0 podman[461036]: 2025-11-26 02:20:51.611682325 +0000 UTC m=+0.144950703 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
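The health_status events above come from podman healthchecks that edpm_ansible wires up from the printed config_data. An illustrative sketch of how one such healthcheck entry could map onto podman flags; the role's real translation is not part of this log, so treat names and mounts as assumptions taken from the event text:

    import subprocess

    # Illustrative only: turn the kepler config_data healthcheck entry
    # (as printed in the health_status event above) into podman arguments.
    config_data = {
        "image": "quay.io/sustainable_computing_io/kepler:release-0.7.12",
        "healthcheck": {
            "test": "/openstack/healthcheck kepler",
            "mount": "/var/lib/openstack/healthchecks/kepler",
        },
    }
    args = [
        "podman", "run", "-d", "--name", "kepler",
        "--health-cmd", config_data["healthcheck"]["test"],
        "-v", f"{config_data['healthcheck']['mount']}:/openstack:ro,z",
        config_data["image"],
    ]
    subprocess.run(args, check=False)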
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
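Each pg_autoscaler pair above computes a raw PG target as capacity ratio times bias times a cluster-wide PG budget, then quantizes toward a power of two. The budget consistent with these numbers is 300 (e.g. 3 OSDs at the default 100 PGs per OSD: 0.0015185461 x 1.0 x 300 ~ 0.4556 for 'vms'). A rough sketch of that visible arithmetic, leaving out the mgr module's change thresholds and per-pool minimums:

    import math

    # Rough sketch of the arithmetic visible in the pg_autoscaler lines.
    # budget ~= osd_count * target PGs per OSD (3 * 100 here); the real
    # mgr module also applies change thresholds and per-pool minimums,
    # omitted in this sketch.
    def raw_pg_target(usage_ratio, bias, budget=300):
        return usage_ratio * bias * budget

    def quantize(raw, minimum=1):
        # nearest power of two, floored at `minimum`
        if raw < minimum:
            return minimum
        return 2 ** int(round(math.log2(raw)))

    print(raw_pg_target(0.0015185461027544442, 1.0))  # ~0.4556 ('vms')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')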
Nov 26 02:20:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:20:53 compute-0 nova_compute[350387]: 2025-11-26 02:20:53.811 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:54 compute-0 nova_compute[350387]: 2025-11-26 02:20:54.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:20:54 compute-0 nova_compute[350387]: 2025-11-26 02:20:54.469 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:20:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 959da06e-4cb2-4ed1-a0f7-1c43f273c335 does not exist
Nov 26 02:20:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c43b7dbf-cfd5-4cb9-bb6c-33a685972928 does not exist
Nov 26 02:20:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 00d73457-14b0-4347-b09d-4230c028fa27 does not exist
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:20:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:20:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:20:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 02:20:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:20:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:20:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
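The handle_command/audit pairs above are the cephadm mgr module driving the monitor with JSON commands. The same commands can be issued from Python through librados' mon_command interface; a minimal sketch, assuming python3-rados plus a readable ceph.conf and admin keyring on the host:

    import json
    import rados

    # Send the same "config rm" seen in the audit log through librados.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = {"prefix": "config rm",
           "who": "osd/host:compute-0",
           "name": "osd_memory_target"}
    # mon_command takes a JSON command string and an input buffer, and
    # returns (return code, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)
    cluster.shutdown()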
Nov 26 02:20:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:20:55 compute-0 nova_compute[350387]: 2025-11-26 02:20:55.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:55 compute-0 podman[461342]: 2025-11-26 02:20:55.688238173 +0000 UTC m=+0.070672941 container create 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:20:55 compute-0 podman[461342]: 2025-11-26 02:20:55.658992103 +0000 UTC m=+0.041426871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:20:55 compute-0 systemd[1]: Started libpod-conmon-415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39.scope.
Nov 26 02:20:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1 op/s
Nov 26 02:20:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:20:55 compute-0 podman[461342]: 2025-11-26 02:20:55.848804701 +0000 UTC m=+0.231239509 container init 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:20:55 compute-0 podman[461342]: 2025-11-26 02:20:55.864534411 +0000 UTC m=+0.246969179 container start 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:20:55 compute-0 podman[461342]: 2025-11-26 02:20:55.870976222 +0000 UTC m=+0.253411030 container attach 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:20:55 compute-0 serene_meitner[461359]: 167 167
Nov 26 02:20:55 compute-0 systemd[1]: libpod-415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39.scope: Deactivated successfully.
Nov 26 02:20:55 compute-0 conmon[461359]: conmon 415571469d41c5be87f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39.scope/container/memory.events
Nov 26 02:20:55 compute-0 podman[461364]: 2025-11-26 02:20:55.947725782 +0000 UTC m=+0.054433566 container died 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1cb4be37310289efbbd31aa2da82d6d1e6d160b8658b50307518c33871feb5b-merged.mount: Deactivated successfully.
Nov 26 02:20:56 compute-0 podman[461364]: 2025-11-26 02:20:56.042139148 +0000 UTC m=+0.148846882 container remove 415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 02:20:56 compute-0 systemd[1]: libpod-conmon-415571469d41c5be87f140a32cd8f20ac82e54e845fdb69d6fcbf9ac70895e39.scope: Deactivated successfully.
Nov 26 02:20:56 compute-0 podman[461384]: 2025-11-26 02:20:56.367534395 +0000 UTC m=+0.089842529 container create d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:20:56 compute-0 podman[461384]: 2025-11-26 02:20:56.333428169 +0000 UTC m=+0.055736303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:20:56 compute-0 systemd[1]: Started libpod-conmon-d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7.scope.
Nov 26 02:20:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:56 compute-0 podman[461384]: 2025-11-26 02:20:56.532223049 +0000 UTC m=+0.254531193 container init d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:20:56 compute-0 podman[461384]: 2025-11-26 02:20:56.566498949 +0000 UTC m=+0.288807083 container start d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:20:56 compute-0 podman[461384]: 2025-11-26 02:20:56.573121845 +0000 UTC m=+0.295429989 container attach d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:20:57 compute-0 podman[461414]: 2025-11-26 02:20:57.561268111 +0000 UTC m=+0.102124582 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:20:57 compute-0 podman[461413]: 2025-11-26 02:20:57.61403202 +0000 UTC m=+0.161064674 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal)
Nov 26 02:20:57 compute-0 objective_thompson[461400]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:20:57 compute-0 objective_thompson[461400]: --> relative data size: 1.0
Nov 26 02:20:57 compute-0 objective_thompson[461400]: --> All data devices are unavailable
Nov 26 02:20:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 170 B/s wr, 3 op/s
Nov 26 02:20:57 compute-0 systemd[1]: libpod-d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7.scope: Deactivated successfully.
Nov 26 02:20:57 compute-0 podman[461384]: 2025-11-26 02:20:57.856333629 +0000 UTC m=+1.578641783 container died d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 26 02:20:57 compute-0 systemd[1]: libpod-d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7.scope: Consumed 1.210s CPU time.
Nov 26 02:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3600e956117337a16a749821562799551803e8aaa895dc459921cb2489a55aa2-merged.mount: Deactivated successfully.
Nov 26 02:20:57 compute-0 podman[461384]: 2025-11-26 02:20:57.958077219 +0000 UTC m=+1.680385333 container remove d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:20:57 compute-0 systemd[1]: libpod-conmon-d151c5693ca7f49054c508e6501c50363e8d62eebd10016d8bbfc6daac2de9d7.scope: Deactivated successfully.
Nov 26 02:20:58 compute-0 nova_compute[350387]: 2025-11-26 02:20:58.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:20:58 compute-0 nova_compute[350387]: 2025-11-26 02:20:58.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:20:58 compute-0 nova_compute[350387]: 2025-11-26 02:20:58.814 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.188529895 +0000 UTC m=+0.077327958 container create d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.161071396 +0000 UTC m=+0.049869489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:20:59 compute-0 systemd[1]: Started libpod-conmon-d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a.scope.
Nov 26 02:20:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.335487323 +0000 UTC m=+0.224285436 container init d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.347570961 +0000 UTC m=+0.236369034 container start d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:20:59 compute-0 elastic_solomon[461639]: 167 167
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.355710779 +0000 UTC m=+0.244508892 container attach d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:20:59 compute-0 systemd[1]: libpod-d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a.scope: Deactivated successfully.
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.357900131 +0000 UTC m=+0.246698204 container died d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-448a9ae8c73f325fa2ae36392ac9e7f7cd4fde8daec0137f332692431d68f971-merged.mount: Deactivated successfully.
Nov 26 02:20:59 compute-0 podman[461623]: 2025-11-26 02:20:59.409940879 +0000 UTC m=+0.298738952 container remove d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:20:59 compute-0 systemd[1]: libpod-conmon-d6fa57ddb38ce67b1bad7f0b883cb72d03b9c1ef710569de0de6c3c9d341547a.scope: Deactivated successfully.
Nov 26 02:20:59 compute-0 nova_compute[350387]: 2025-11-26 02:20:59.473 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:20:59 compute-0 podman[461661]: 2025-11-26 02:20:59.697516145 +0000 UTC m=+0.102102792 container create bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:20:59 compute-0 podman[461661]: 2025-11-26 02:20:59.65628452 +0000 UTC m=+0.060871217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:20:59 compute-0 podman[158021]: time="2025-11-26T02:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:20:59 compute-0 systemd[1]: Started libpod-conmon-bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da.scope.
Nov 26 02:20:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Nov 26 02:20:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a537a63d82dd9b4e169471f3cc4e222cd3e9da05f3da157352aa246cdcba8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a537a63d82dd9b4e169471f3cc4e222cd3e9da05f3da157352aa246cdcba8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a537a63d82dd9b4e169471f3cc4e222cd3e9da05f3da157352aa246cdcba8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a537a63d82dd9b4e169471f3cc4e222cd3e9da05f3da157352aa246cdcba8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:20:59 compute-0 podman[461661]: 2025-11-26 02:20:59.913444905 +0000 UTC m=+0.318031572 container init bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:20:59 compute-0 podman[461661]: 2025-11-26 02:20:59.931036398 +0000 UTC m=+0.335623045 container start bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 02:20:59 compute-0 podman[461661]: 2025-11-26 02:20:59.938553069 +0000 UTC m=+0.343139756 container attach bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:20:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45380 "" "Go-http-client/1.1"
Nov 26 02:20:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Nov 26 02:21:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]: {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    "0": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "devices": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "/dev/loop3"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            ],
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_name": "ceph_lv0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_size": "21470642176",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "name": "ceph_lv0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "tags": {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_name": "ceph",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.crush_device_class": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.encrypted": "0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_id": "0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.vdo": "0"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            },
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "vg_name": "ceph_vg0"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        }
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    ],
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    "1": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "devices": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "/dev/loop4"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            ],
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_name": "ceph_lv1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_size": "21470642176",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "name": "ceph_lv1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "tags": {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_name": "ceph",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.crush_device_class": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.encrypted": "0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_id": "1",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.vdo": "0"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            },
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "vg_name": "ceph_vg1"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        }
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    ],
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    "2": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "devices": [
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "/dev/loop5"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            ],
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_name": "ceph_lv2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_size": "21470642176",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "name": "ceph_lv2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "tags": {
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.cluster_name": "ceph",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.crush_device_class": "",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.encrypted": "0",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osd_id": "2",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:                "ceph.vdo": "0"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            },
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "type": "block",
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:            "vg_name": "ceph_vg2"
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:        }
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]:    ]
Nov 26 02:21:00 compute-0 wizardly_agnesi[461677]: }
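wizardly_agnesi's JSON payload is `ceph-volume lvm list --format json` output: a map from OSD id to the LV records backing it. A short sketch that reduces such output to an OSD-to-device table, using the field names visible above:

    import json
    import subprocess

    # Map OSD ids to their backing LVs/devices from
    # `ceph-volume lvm list --format json` output.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(out).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({', '.join(lv['devices'])}, fsid {lv['tags']['ceph.osd_fsid']})")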
Nov 26 02:21:00 compute-0 systemd[1]: libpod-bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da.scope: Deactivated successfully.
Nov 26 02:21:00 compute-0 podman[461661]: 2025-11-26 02:21:00.744924082 +0000 UTC m=+1.149510729 container died bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a537a63d82dd9b4e169471f3cc4e222cd3e9da05f3da157352aa246cdcba8c-merged.mount: Deactivated successfully.
Nov 26 02:21:00 compute-0 podman[461661]: 2025-11-26 02:21:00.85120274 +0000 UTC m=+1.255789397 container remove bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_agnesi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:21:00 compute-0 systemd[1]: libpod-conmon-bf2fb25ce196fedcca71f2621d0c85b265f2d443acd771dc8d4e13fe002660da.scope: Deactivated successfully.
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: ERROR   02:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:21:01 compute-0 openstack_network_exporter[367323]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:21:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.055431911 +0000 UTC m=+0.083401128 container create 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.030146402 +0000 UTC m=+0.058115719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:21:02 compute-0 systemd[1]: Started libpod-conmon-9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf.scope.
Nov 26 02:21:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.19675491 +0000 UTC m=+0.224724187 container init 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.215539987 +0000 UTC m=+0.243509234 container start 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.221442902 +0000 UTC m=+0.249412149 container attach 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 02:21:02 compute-0 exciting_lamport[461852]: 167 167
Nov 26 02:21:02 compute-0 systemd[1]: libpod-9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf.scope: Deactivated successfully.
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.226396491 +0000 UTC m=+0.254365738 container died 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc68b7db9583cd653bab519723c4bd879a2fec69f8b63fb55e8d8204bc428337-merged.mount: Deactivated successfully.
Nov 26 02:21:02 compute-0 podman[461837]: 2025-11-26 02:21:02.313208703 +0000 UTC m=+0.341177950 container remove 9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 26 02:21:02 compute-0 systemd[1]: libpod-conmon-9a861c72b2f07093c2f84472ab3f22f9bc8c2d43b9afcfaadf1ff34356478bcf.scope: Deactivated successfully.
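exciting_lamport exists only to print "167 167": before deploying anything, cephadm probes the ceph image for the uid/gid that owns /var/lib/ceph inside the container (167:167 is the ceph user and group in CentOS/RHEL ceph packaging) so host-side directories can be chowned to match. A rough reconstruction of that probe; the exact cephadm invocation is an assumption, but a stat of /var/lib/ceph in the image yields the same "uid gid" pair:

    # sketch: reproduce the uid/gid probe whose output ("167 167") is logged above
    # (the exact cephadm command line is an assumption; the stat target is /var/lib/ceph)
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)  # expected: 167 167 (ceph:ceph in CentOS Stream 9 packaging)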
Nov 26 02:21:02 compute-0 podman[461877]: 2025-11-26 02:21:02.624150445 +0000 UTC m=+0.099116028 container create 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:21:02 compute-0 podman[461877]: 2025-11-26 02:21:02.593678611 +0000 UTC m=+0.068644274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:21:02 compute-0 systemd[1]: Started libpod-conmon-0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6.scope.
Nov 26 02:21:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29bd169745413b589cf53eb73490d341d8674b8acffb713adfe53b1dd17efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29bd169745413b589cf53eb73490d341d8674b8acffb713adfe53b1dd17efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29bd169745413b589cf53eb73490d341d8674b8acffb713adfe53b1dd17efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29bd169745413b589cf53eb73490d341d8674b8acffb713adfe53b1dd17efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:21:02 compute-0 podman[461877]: 2025-11-26 02:21:02.797375909 +0000 UTC m=+0.272341502 container init 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:21:02 compute-0 podman[461877]: 2025-11-26 02:21:02.81811509 +0000 UTC m=+0.293080683 container start 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:21:02 compute-0 podman[461877]: 2025-11-26 02:21:02.823953263 +0000 UTC m=+0.298918856 container attach 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:21:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 7.9 KiB/s wr, 4 op/s
Nov 26 02:21:03 compute-0 nova_compute[350387]: 2025-11-26 02:21:03.818 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:03 compute-0 xenodochial_sanderson[461893]: {
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_id": 0,
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "type": "bluestore"
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    },
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_id": 2,
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "type": "bluestore"
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    },
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_id": 1,
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:        "type": "bluestore"
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]:    }
Nov 26 02:21:04 compute-0 xenodochial_sanderson[461893]: }
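The JSON from xenodochial_sanderson is ceph-volume raw-list-style inventory: a map of osd_uuid to {ceph_fsid, device, osd_id, type}, here reporting three bluestore OSDs (0, 1, 2) on the ceph_vg0/1/2 logical volumes, all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A minimal sketch that indexes such a blob by OSD id (key names copied from the output above; the literal is trimmed to one entry):

    # sketch: index the ceph-volume raw-list JSON above by OSD id
    import json

    blob = """{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }"""

    by_osd = {e["osd_id"]: e["device"]
              for e in json.loads(blob).values()
              if e.get("type") == "bluestore"}
    print(by_osd)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}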
Nov 26 02:21:04 compute-0 systemd[1]: libpod-0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6.scope: Deactivated successfully.
Nov 26 02:21:04 compute-0 systemd[1]: libpod-0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6.scope: Consumed 1.212s CPU time.
Nov 26 02:21:04 compute-0 podman[461877]: 2025-11-26 02:21:04.045582601 +0000 UTC m=+1.520548164 container died 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd29bd169745413b589cf53eb73490d341d8674b8acffb713adfe53b1dd17efe-merged.mount: Deactivated successfully.
Nov 26 02:21:04 compute-0 podman[461877]: 2025-11-26 02:21:04.145085469 +0000 UTC m=+1.620051062 container remove 0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_sanderson, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:21:04 compute-0 systemd[1]: libpod-conmon-0c15a97c80008e4ebec533e32c40b86bcce5b7691971937b6e31fe90c22cfea6.scope: Deactivated successfully.
Nov 26 02:21:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:21:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:21:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:21:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:21:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3358ad3d-970a-46f9-b309-577d32080b46 does not exist
Nov 26 02:21:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6c104565-535f-4a2a-9d31-a978056ac4f4 does not exist
Nov 26 02:21:04 compute-0 nova_compute[350387]: 2025-11-26 02:21:04.477 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:21:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:21:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Nov 26 02:21:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.6 KiB/s wr, 3 op/s
Nov 26 02:21:08 compute-0 nova_compute[350387]: 2025-11-26 02:21:08.822 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:09 compute-0 nova_compute[350387]: 2025-11-26 02:21:09.481 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 8.4 KiB/s wr, 1 op/s
Nov 26 02:21:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:10 compute-0 podman[461988]: 2025-11-26 02:21:10.607399301 +0000 UTC m=+0.135587660 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 02:21:10 compute-0 podman[461987]: 2025-11-26 02:21:10.611182437 +0000 UTC m=+0.145282192 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 26 02:21:10 compute-0 podman[461989]: 2025-11-26 02:21:10.617752041 +0000 UTC m=+0.144529380 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
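Each podman health_status line above embeds the container's full edpm_ansible config_data label, and the value is a Python dict literal (single quotes, bare True), not JSON. One way to recover it as structured data is to read the label back from podman and literal-eval it; a sketch, assuming the ovn_metadata_agent container from the first health_status line is still present on the host:

    # sketch: read the config_data label back and literal-eval the Python dict
    # (container name taken from the first health_status line above)
    import ast
    import json
    import subprocess

    labels = json.loads(subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .Config.Labels}}"],
        capture_output=True, text=True, check=True,
    ).stdout)
    config = ast.literal_eval(labels["config_data"])  # dict literal, not JSON
    print(config["healthcheck"]["mount"])  # /var/lib/openstack/healthchecks/ovn_metadata_agent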
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Nov 26 02:21:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.4 KiB/s wr, 0 op/s
Nov 26 02:21:13 compute-0 nova_compute[350387]: 2025-11-26 02:21:13.826 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:14 compute-0 nova_compute[350387]: 2025-11-26 02:21:14.485 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.315961) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675316007, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1269, "num_deletes": 251, "total_data_size": 1984953, "memory_usage": 2014088, "flush_reason": "Manual Compaction"}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675331435, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1944309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42701, "largest_seqno": 43969, "table_properties": {"data_size": 1938232, "index_size": 3408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12669, "raw_average_key_size": 19, "raw_value_size": 1926053, "raw_average_value_size": 3014, "num_data_blocks": 153, "num_entries": 639, "num_filter_entries": 639, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123546, "oldest_key_time": 1764123546, "file_creation_time": 1764123675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15570 microseconds, and 9449 cpu microseconds.
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.331529) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1944309 bytes OK
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.331558) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.334770) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.334794) EVENT_LOG_v1 {"time_micros": 1764123675334786, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.334897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1979227, prev total WAL file size 1979227, number of live WAL files 2.
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.336305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1898KB)], [101(8864KB)]
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675336359, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11021937, "oldest_snapshot_seqno": -1}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5913 keys, 9347754 bytes, temperature: kUnknown
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675403976, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9347754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9307988, "index_size": 23912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 153921, "raw_average_key_size": 26, "raw_value_size": 9200798, "raw_average_value_size": 1556, "num_data_blocks": 953, "num_entries": 5913, "num_filter_entries": 5913, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.405279) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9347754 bytes
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.408604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.6 rd, 136.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.7 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 6427, records dropped: 514 output_compression: NoCompression
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.408635) EVENT_LOG_v1 {"time_micros": 1764123675408621, "job": 60, "event": "compaction_finished", "compaction_time_micros": 68612, "compaction_time_cpu_micros": 49544, "output_level": 6, "num_output_files": 1, "total_output_size": 9347754, "num_input_records": 6427, "num_output_records": 5913, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675412695, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675417638, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.336131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.418994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.419003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.419006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.419009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:21:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:21:15.419012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
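The rocksdb burst above is the monitor's periodic store compaction: JOB 59 flushes a ~1.9 MB memtable to the L0 table #103, JOB 60 merges #103 with the existing L6 file #101 into #104, then both inputs and the old WAL segment are deleted, leaving lsm_state [0, 0, 0, 0, 0, 0, 1]. The EVENT_LOG_v1 payloads are JSON after a fixed prefix, so the summary numbers can be re-derived mechanically; for example, compaction_finished reports 9347754 bytes written in 68612 µs ≈ 136.2 MB/s, matching the "136.2 wr" in the human-readable summary. A sketch (the sample line is trimmed from the compaction_finished event above):

    # sketch: parse one rocksdb EVENT_LOG_v1 payload and recompute the
    # write rate the compaction summary reports (bytes/microsecond == MB/s)
    import json

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764123675408621, "job": 60, '
            '"event": "compaction_finished", "compaction_time_micros": 68612, '
            '"output_level": 6, "total_output_size": 9347754}')
    ev = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    if ev["event"] == "compaction_finished":
        rate = ev["total_output_size"] / ev["compaction_time_micros"]
        print(f"job {ev['job']}: {rate:.1f} MB/s into L{ev['output_level']}")  # ~136.2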
Nov 26 02:21:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 682 B/s wr, 0 op/s
Nov 26 02:21:17 compute-0 podman[462044]: 2025-11-26 02:21:17.625452805 +0000 UTC m=+0.166231739 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 02:21:17 compute-0 podman[462045]: 2025-11-26 02:21:17.670162717 +0000 UTC m=+0.204387747 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:21:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Nov 26 02:21:18 compute-0 nova_compute[350387]: 2025-11-26 02:21:18.829 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:19 compute-0 nova_compute[350387]: 2025-11-26 02:21:19.489 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Nov 26 02:21:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:21:22 compute-0 podman[462092]: 2025-11-26 02:21:22.607856193 +0000 UTC m=+0.148621885 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:21:22 compute-0 podman[462091]: 2025-11-26 02:21:22.623955074 +0000 UTC m=+0.170641502 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 02:21:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:21:23 compute-0 nova_compute[350387]: 2025-11-26 02:21:23.833 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:24 compute-0 nova_compute[350387]: 2025-11-26 02:21:24.492 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:21:25.006 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:21:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:21:25.008 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:21:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:21:25.009 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:21:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:21:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:21:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/496469914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:21:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:21:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/496469914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
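The two audited mon_commands from client.openstack (the cephx user the OpenStack services authenticate as) are the usual Cinder capacity poll: a cluster-wide df plus the quota of the volumes pool, both requested as JSON. A sketch of the same queries via the ceph CLI, assuming admin-capable credentials on the host:

    # sketch: CLI equivalents of the audited mon_commands above
    # (assumes a host `ceph` CLI with a capable keyring)
    import json
    import subprocess

    def mon_cmd(*args: str) -> dict:
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_cmd("df")                                      # {"prefix":"df"} in the log
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")  # the second command
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))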
Nov 26 02:21:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Nov 26 02:21:28 compute-0 podman[462130]: 2025-11-26 02:21:28.588494821 +0000 UTC m=+0.119958662 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:21:28 compute-0 podman[462129]: 2025-11-26 02:21:28.593407629 +0000 UTC m=+0.131825785 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Nov 26 02:21:28 compute-0 nova_compute[350387]: 2025-11-26 02:21:28.836 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:29 compute-0 nova_compute[350387]: 2025-11-26 02:21:29.496 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:21:29 compute-0 podman[158021]: time="2025-11-26T02:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:21:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:21:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Nov 26 02:21:29 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:31 compute-0 openstack_network_exporter[367323]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:21:31 compute-0 openstack_network_exporter[367323]: ERROR   02:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:21:31 compute-0 openstack_network_exporter[367323]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:21:31 compute-0 openstack_network_exporter[367323]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:21:31 compute-0 openstack_network_exporter[367323]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
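(The appctl errors above are expected on a compute node: the exporter appears to build its calls by looking for <daemon>.<pid>.ctl control sockets under the runtime directories it has mounted, /run/openvswitch and /run/ovn in the config_data at the top of this window, and ovn-northd only runs on controller nodes, so no such socket exists here. A sketch of that lookup, with the directory paths taken from those volume mounts:)

    import glob

    # Runtime dirs from the exporter's volume mounts; OVS/OVN daemons create
    # control sockets named <daemon>.<pid>.ctl in their run directory.
    def find_ctl(run_dir, daemon):
        return glob.glob(f"{run_dir}/{daemon}.*.ctl")

    print(find_ctl("/run/ovn", "ovn-northd"))           # [] here -> "no control socket files found"
    print(find_ctl("/run/openvswitch", "ovsdb-server"))
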
Nov 26 02:21:31 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:33 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Nov 26 02:21:33 compute-0 nova_compute[350387]: 2025-11-26 02:21:33.839 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:34 compute-0 nova_compute[350387]: 2025-11-26 02:21:34.500 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:35 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:21:37 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:21:38 compute-0 nova_compute[350387]: 2025-11-26 02:21:38.845 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:39 compute-0 nova_compute[350387]: 2025-11-26 02:21:39.505 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:39 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:21:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:21:41
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log']
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:21:41 compute-0 podman[462172]: 2025-11-26 02:21:41.593536962 +0000 UTC m=+0.135446976 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 26 02:21:41 compute-0 podman[462173]: 2025-11-26 02:21:41.600476977 +0000 UTC m=+0.136088294 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 26 02:21:41 compute-0 podman[462174]: 2025-11-26 02:21:41.608612345 +0000 UTC m=+0.142120763 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
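(Each health_status line above is podman executing the container's configured check, the 'healthcheck' entry in its config_data, and recording the result. The same check can be triggered by hand with podman's healthcheck subcommand; a sketch using a container name from the log:)

    import subprocess

    # "podman healthcheck run" executes the container's configured test and
    # returns 0 for healthy, non-zero otherwise.
    name = "ovn_metadata_agent"  # container_name from the log line above
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else "unhealthy")
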
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:21:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.338 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.339 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.340 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.340 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:21:42 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:21:42 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976780470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.858 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
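(The resource audit shells out to the exact command logged above to size the RBD-backed storage. A self-contained sketch of that call and of pulling the cluster totals out of its JSON; the stats field names are as in current Ceph releases, not verified against this cluster:)

    import json
    import subprocess

    # Exact command taken from the log line above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(f"free: {stats['total_avail_bytes'] / 1024**3:.1f} GiB")
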
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.981 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.983 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.993 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:21:42 compute-0 nova_compute[350387]: 2025-11-26 02:21:42.994 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.574 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.576 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.897003173828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.576 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.577 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.675 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.676 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.677 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.678 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.695 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.718 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.719 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.743 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.769 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
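(Placement treats each inventory record above as schedulable capacity of (total - reserved) * allocation_ratio. Applied to the logged inventory data:)

    # Inventory values copied from the log lines above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
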
Nov 26 02:21:43 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.851 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:21:43 compute-0 nova_compute[350387]: 2025-11-26 02:21:43.878 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:21:44 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374336704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.337 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.351 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.372 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.377 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.377 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
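(The Acquiring/acquired/released triplets above, including the waited/held timings, are oslo.concurrency's lockutils wrapper logging around a named lock. The pattern corresponds to code of roughly this shape; the decorator is real oslo API, the function body is illustrative:)

    from oslo_concurrency import lockutils

    # Produces the "Acquiring lock ..." / "acquired" / "released" DEBUG
    # lines above; "compute_resources" is the lock name shown in the log.
    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        ...  # resource-tracker work runs while the lock is held
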
Nov 26 02:21:44 compute-0 nova_compute[350387]: 2025-11-26 02:21:44.507 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:45 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.378 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.379 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.379 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.799 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.800 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.801 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:21:47 compute-0 nova_compute[350387]: 2025-11-26 02:21:47.801 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:21:47 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:21:48 compute-0 podman[462273]: 2025-11-26 02:21:48.573451338 +0000 UTC m=+0.119038496 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 02:21:48 compute-0 podman[462274]: 2025-11-26 02:21:48.645700142 +0000 UTC m=+0.186224819 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:21:48 compute-0 nova_compute[350387]: 2025-11-26 02:21:48.851 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.161 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.176 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.176 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
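(The info-cache refresh above stores the network_info blob logged at 02:21:49.161. A trimmed, runnable sketch of walking that structure for the addresses it carries, with the literal values copied from the logged entry:)

    # Trimmed copy of the network_info entry logged above.
    network_info = [{
        "devname": "tap0659d4f2-a7",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/16",
            "gateway": {"address": "10.100.0.1"},
            "ips": [{"address": "10.100.2.57"}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["devname"], ip["address"], "gw", subnet["gateway"]["address"])
    # tap0659d4f2-a7 10.100.2.57 gw 10.100.0.1
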
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.176 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.177 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.177 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:49 compute-0 nova_compute[350387]: 2025-11-26 02:21:49.511 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:49 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:21:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00152159845672983 of space, bias 1.0, pg target 0.456479537018949 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
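(The autoscaler lines above all follow one formula: pg target = (used-of-space ratio) x bias x a scale factor that works out to 300 on this cluster, plausibly mon_target_pg_per_osd (default 100) times the OSD count; the 300 is inferred from the logged numbers, not read from config. The raw target is then quantized to a power of two, and a pool keeps its current pg_num unless the target is far enough away, hence "quantized to 32 (current 32)". Reproducing the logged raw targets:)

    pools = {
        # (used-of-space ratio, bias) copied from the lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.00152159845672983,   1.0),
        "images":             (0.00125203744627857,   1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    SCALE = 300  # inferred: ratio * bias * 300 matches every logged "pg target"

    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * SCALE)
    # vms -> 0.456479537018949, cephfs.cephfs.meta -> 0.0006104707950771635, ...
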
Nov 26 02:21:51 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:53 compute-0 podman[462320]: 2025-11-26 02:21:53.59244852 +0000 UTC m=+0.130463426 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:21:53 compute-0 podman[462319]: 2025-11-26 02:21:53.596625848 +0000 UTC m=+0.141215508 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 02:21:53 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:53 compute-0 nova_compute[350387]: 2025-11-26 02:21:53.855 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:54 compute-0 nova_compute[350387]: 2025-11-26 02:21:54.515 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:21:55 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:56 compute-0 nova_compute[350387]: 2025-11-26 02:21:56.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:57 compute-0 nova_compute[350387]: 2025-11-26 02:21:57.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:57 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:21:58 compute-0 nova_compute[350387]: 2025-11-26 02:21:58.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:21:58 compute-0 nova_compute[350387]: 2025-11-26 02:21:58.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:21:58 compute-0 nova_compute[350387]: 2025-11-26 02:21:58.861 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:59 compute-0 nova_compute[350387]: 2025-11-26 02:21:59.519 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:21:59 compute-0 podman[462358]: 2025-11-26 02:21:59.588037226 +0000 UTC m=+0.129347805 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 26 02:21:59 compute-0 podman[462359]: 2025-11-26 02:21:59.613883551 +0000 UTC m=+0.147580866 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
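(Between them, the exporter configs logged above publish metrics on 9100 (node_exporter), 9105 (openstack_network_exporter), 9882 (podman_exporter) and 8888 (kepler). A polling sketch; the plain-http scheme is an assumption, since each exporter loads a web.config.file that may enforce TLS with the certificates mounted under its tls directory, in which case https plus the CA bundle would be needed:)

    import urllib.request

    # Ports from the config_data entries logged above.
    for name, port in [("node_exporter", 9100),
                       ("openstack_network_exporter", 9105),
                       ("podman_exporter", 9882),
                       ("kepler", 8888)]:
        try:
            with urllib.request.urlopen(f"http://localhost:{port}/metrics", timeout=5) as r:
                print(name, r.status, len(r.read()), "bytes")
        except OSError as exc:
            print(name, "unreachable:", exc)
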
Nov 26 02:21:59 compute-0 podman[158021]: time="2025-11-26T02:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:21:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:21:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8645 "" "Go-http-client/1.1"
Nov 26 02:21:59 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:22:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:00 compute-0 nova_compute[350387]: 2025-11-26 02:22:00.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:22:01 compute-0 openstack_network_exporter[367323]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:22:01 compute-0 openstack_network_exporter[367323]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:22:01 compute-0 openstack_network_exporter[367323]: ERROR   02:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:22:01 compute-0 openstack_network_exporter[367323]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:22:01 compute-0 openstack_network_exporter[367323]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:22:01 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 26 02:22:03 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:03 compute-0 nova_compute[350387]: 2025-11-26 02:22:03.866 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:22:04 compute-0 nova_compute[350387]: 2025-11-26 02:22:04.522 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:22:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.305548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725305587, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 250, "total_data_size": 716446, "memory_usage": 728544, "flush_reason": "Manual Compaction"}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725310473, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 460104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43970, "largest_seqno": 44580, "table_properties": {"data_size": 457266, "index_size": 810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7527, "raw_average_key_size": 20, "raw_value_size": 451462, "raw_average_value_size": 1220, "num_data_blocks": 37, "num_entries": 370, "num_filter_entries": 370, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123676, "oldest_key_time": 1764123676, "file_creation_time": 1764123725, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 4973 microseconds, and 1857 cpu microseconds.
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.310523) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 460104 bytes OK
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.310538) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.312815) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.312850) EVENT_LOG_v1 {"time_micros": 1764123725312845, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.312863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 713143, prev total WAL file size 713143, number of live WAL files 2.
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.313986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373536' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(449KB)], [104(9128KB)]
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725314092, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9807858, "oldest_snapshot_seqno": -1}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5794 keys, 6768595 bytes, temperature: kUnknown
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725366390, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6768595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6733811, "index_size": 19186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 151619, "raw_average_key_size": 26, "raw_value_size": 6632845, "raw_average_value_size": 1144, "num_data_blocks": 758, "num_entries": 5794, "num_filter_entries": 5794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123725, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.366716) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6768595 bytes
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.372100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.2 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.9 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(36.0) write-amplify(14.7) OK, records in: 6283, records dropped: 489 output_compression: NoCompression
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.372204) EVENT_LOG_v1 {"time_micros": 1764123725372131, "job": 62, "event": "compaction_finished", "compaction_time_micros": 52379, "compaction_time_cpu_micros": 38847, "output_level": 6, "num_output_files": 1, "total_output_size": 6768595, "num_input_records": 6283, "num_output_records": 5794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725372721, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123725376547, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.313458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.376807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.376813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.376816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.376880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:22:05.376883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:22:05 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:22:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:22:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:22:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:22:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:22:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e2dc7b2a-54ea-4dea-96e0-10ce363e51ae does not exist
Nov 26 02:22:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a3777afd-9a25-4160-a6a9-1ebb51d8ad3b does not exist
Nov 26 02:22:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 258d47f0-712f-4e75-88de-73c3e723f6b8 does not exist
Nov 26 02:22:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:22:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:22:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:22:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:22:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:22:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:22:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:22:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.120528075 +0000 UTC m=+0.089821678 container create 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.078740934 +0000 UTC m=+0.048034627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:07 compute-0 systemd[1]: Started libpod-conmon-3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797.scope.
Nov 26 02:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.293109711 +0000 UTC m=+0.262403394 container init 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.314177041 +0000 UTC m=+0.283470634 container start 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.319889891 +0000 UTC m=+0.289183564 container attach 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:22:07 compute-0 kind_jang[462683]: 167 167
Nov 26 02:22:07 compute-0 systemd[1]: libpod-3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797.scope: Deactivated successfully.
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.330052946 +0000 UTC m=+0.299346549 container died 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 02:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad444ceaadc9536d9cc9df9229df8526a84680ecc894f7229d4cc09094990bd7-merged.mount: Deactivated successfully.
Nov 26 02:22:07 compute-0 podman[462667]: 2025-11-26 02:22:07.413597826 +0000 UTC m=+0.382891429 container remove 3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 02:22:07 compute-0 systemd[1]: libpod-conmon-3f66e3cef900c41f74691ca5ca8ce4542b40a964ff482d01c542f01afaf7b797.scope: Deactivated successfully.
Nov 26 02:22:07 compute-0 podman[462706]: 2025-11-26 02:22:07.75995795 +0000 UTC m=+0.127618307 container create 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:07 compute-0 podman[462706]: 2025-11-26 02:22:07.686228504 +0000 UTC m=+0.053888911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:07 compute-0 systemd[1]: Started libpod-conmon-0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663.scope.
Nov 26 02:22:07 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:07 compute-0 podman[462706]: 2025-11-26 02:22:07.907378641 +0000 UTC m=+0.275039008 container init 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 26 02:22:07 compute-0 podman[462706]: 2025-11-26 02:22:07.929235683 +0000 UTC m=+0.296896010 container start 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:22:07 compute-0 podman[462706]: 2025-11-26 02:22:07.935633642 +0000 UTC m=+0.303294009 container attach 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:08 compute-0 nova_compute[350387]: 2025-11-26 02:22:08.869 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:09 compute-0 beautiful_wu[462723]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:22:09 compute-0 beautiful_wu[462723]: --> relative data size: 1.0
Nov 26 02:22:09 compute-0 beautiful_wu[462723]: --> All data devices are unavailable
Nov 26 02:22:09 compute-0 systemd[1]: libpod-0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663.scope: Deactivated successfully.
Nov 26 02:22:09 compute-0 systemd[1]: libpod-0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663.scope: Consumed 1.222s CPU time.
Nov 26 02:22:09 compute-0 podman[462706]: 2025-11-26 02:22:09.228019043 +0000 UTC m=+1.595679370 container died 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 02:22:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2915e08aa3a2a6a2ef1e3f155aca396830d16a5ab59e23d6b4fa5dbc8a2468a8-merged.mount: Deactivated successfully.
Nov 26 02:22:09 compute-0 podman[462706]: 2025-11-26 02:22:09.330237487 +0000 UTC m=+1.697897814 container remove 0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 02:22:09 compute-0 systemd[1]: libpod-conmon-0fff21d31df35269066ad4815d1f41fe7f43b9e2b2f34b40764efe8f6b35b663.scope: Deactivated successfully.
Nov 26 02:22:09 compute-0 nova_compute[350387]: 2025-11-26 02:22:09.525 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:09 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.62378888 +0000 UTC m=+0.097875083 container create 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.58414884 +0000 UTC m=+0.058235093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:10 compute-0 systemd[1]: Started libpod-conmon-40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b.scope.
Nov 26 02:22:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.760965654 +0000 UTC m=+0.235051907 container init 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.776546851 +0000 UTC m=+0.250633064 container start 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.784303858 +0000 UTC m=+0.258390061 container attach 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:22:10 compute-0 optimistic_heyrovsky[462919]: 167 167
Nov 26 02:22:10 compute-0 systemd[1]: libpod-40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b.scope: Deactivated successfully.
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.787810946 +0000 UTC m=+0.261897179 container died 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-30aff3209bc1cd1a16c5fa3807d42e11f76de4bfe48de4c67c0eddc8d7d0c07a-merged.mount: Deactivated successfully.
Nov 26 02:22:10 compute-0 podman[462902]: 2025-11-26 02:22:10.860274437 +0000 UTC m=+0.334360640 container remove 40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_heyrovsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:22:10 compute-0 systemd[1]: libpod-conmon-40255b77050330e3aff8d2a96da1b016c73b4f312e09e84e73a647009c765a5b.scope: Deactivated successfully.
Nov 26 02:22:11 compute-0 podman[462942]: 2025-11-26 02:22:11.17558831 +0000 UTC m=+0.092663447 container create f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:11 compute-0 podman[462942]: 2025-11-26 02:22:11.136444683 +0000 UTC m=+0.053519870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:11 compute-0 systemd[1]: Started libpod-conmon-f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5.scope.
Nov 26 02:22:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde1caba69cb693a602a3b46641550f97170c0d4ef074bf155d12665d690592b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde1caba69cb693a602a3b46641550f97170c0d4ef074bf155d12665d690592b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde1caba69cb693a602a3b46641550f97170c0d4ef074bf155d12665d690592b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde1caba69cb693a602a3b46641550f97170c0d4ef074bf155d12665d690592b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:11 compute-0 podman[462942]: 2025-11-26 02:22:11.415652766 +0000 UTC m=+0.332727943 container init f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:11 compute-0 podman[462942]: 2025-11-26 02:22:11.434564576 +0000 UTC m=+0.351639713 container start f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 02:22:11 compute-0 podman[462942]: 2025-11-26 02:22:11.442634762 +0000 UTC m=+0.359709889 container attach f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:22:11 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:12 compute-0 heuristic_elion[462958]: {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    "0": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "devices": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "/dev/loop3"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            ],
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_name": "ceph_lv0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_size": "21470642176",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "name": "ceph_lv0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "tags": {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_name": "ceph",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.crush_device_class": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.encrypted": "0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_id": "0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.vdo": "0"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            },
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "vg_name": "ceph_vg0"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        }
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    ],
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    "1": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "devices": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "/dev/loop4"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            ],
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_name": "ceph_lv1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_size": "21470642176",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "name": "ceph_lv1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "tags": {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_name": "ceph",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.crush_device_class": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.encrypted": "0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_id": "1",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.vdo": "0"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            },
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "vg_name": "ceph_vg1"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        }
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    ],
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    "2": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "devices": [
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "/dev/loop5"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            ],
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_name": "ceph_lv2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_size": "21470642176",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "name": "ceph_lv2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "tags": {
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.cluster_name": "ceph",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.crush_device_class": "",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.encrypted": "0",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osd_id": "2",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:                "ceph.vdo": "0"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            },
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "type": "block",
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:            "vg_name": "ceph_vg2"
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:        }
Nov 26 02:22:12 compute-0 heuristic_elion[462958]:    ]
Nov 26 02:22:12 compute-0 heuristic_elion[462958]: }
Nov 26 02:22:12 compute-0 systemd[1]: libpod-f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5.scope: Deactivated successfully.
Nov 26 02:22:12 compute-0 podman[462942]: 2025-11-26 02:22:12.263485841 +0000 UTC m=+1.180560978 container died f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 02:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde1caba69cb693a602a3b46641550f97170c0d4ef074bf155d12665d690592b-merged.mount: Deactivated successfully.
Nov 26 02:22:12 compute-0 podman[462942]: 2025-11-26 02:22:12.370598213 +0000 UTC m=+1.287673320 container remove f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elion, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:22:12 compute-0 systemd[1]: libpod-conmon-f5e189066c42129e7bb7abb8284fadf4c225d29c6e59c63f2e6581b08e3f69b5.scope: Deactivated successfully.
Nov 26 02:22:12 compute-0 podman[462976]: 2025-11-26 02:22:12.440193532 +0000 UTC m=+0.116959709 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 02:22:12 compute-0 podman[462977]: 2025-11-26 02:22:12.447451896 +0000 UTC m=+0.121020222 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:22:12 compute-0 podman[462969]: 2025-11-26 02:22:12.472436066 +0000 UTC m=+0.162124954 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.489922684 +0000 UTC m=+0.083106779 container create ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.46299473 +0000 UTC m=+0.056178805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:13 compute-0 systemd[1]: Started libpod-conmon-ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3.scope.
Nov 26 02:22:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.641303706 +0000 UTC m=+0.234487871 container init ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.663876738 +0000 UTC m=+0.257060833 container start ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.670297648 +0000 UTC m=+0.263481793 container attach ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:22:13 compute-0 sleepy_carson[463191]: 167 167
Nov 26 02:22:13 compute-0 systemd[1]: libpod-ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3.scope: Deactivated successfully.
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.67927133 +0000 UTC m=+0.272455425 container died ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-707b54d34d6ce519de1921321399b824d3d09c12c360a9082884150201572c60-merged.mount: Deactivated successfully.
Nov 26 02:22:13 compute-0 podman[463176]: 2025-11-26 02:22:13.758061327 +0000 UTC m=+0.351245392 container remove ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:22:13 compute-0 systemd[1]: libpod-conmon-ba19bb16131944c37f65c12ed8558f6d2d2e6e3385f05e74fa363ba7f900c5e3.scope: Deactivated successfully.
Nov 26 02:22:13 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:13 compute-0 nova_compute[350387]: 2025-11-26 02:22:13.872 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:14 compute-0 podman[463214]: 2025-11-26 02:22:14.018729901 +0000 UTC m=+0.082669477 container create b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:22:14 compute-0 podman[463214]: 2025-11-26 02:22:13.98193174 +0000 UTC m=+0.045871356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:22:14 compute-0 systemd[1]: Started libpod-conmon-b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249.scope.
Nov 26 02:22:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff83543baf6bd2d8478e9fba11d9b5ebc4805a12eb7d2a4887af879f519008b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff83543baf6bd2d8478e9fba11d9b5ebc4805a12eb7d2a4887af879f519008b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff83543baf6bd2d8478e9fba11d9b5ebc4805a12eb7d2a4887af879f519008b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff83543baf6bd2d8478e9fba11d9b5ebc4805a12eb7d2a4887af879f519008b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:22:14 compute-0 podman[463214]: 2025-11-26 02:22:14.196905953 +0000 UTC m=+0.260845529 container init b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:22:14 compute-0 podman[463214]: 2025-11-26 02:22:14.217556502 +0000 UTC m=+0.281496078 container start b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:22:14 compute-0 podman[463214]: 2025-11-26 02:22:14.225505024 +0000 UTC m=+0.289444660 container attach b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:22:14 compute-0 nova_compute[350387]: 2025-11-26 02:22:14.529 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:15 compute-0 friendly_booth[463230]: {
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_id": 0,
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "type": "bluestore"
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    },
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_id": 2,
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "type": "bluestore"
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    },
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_id": 1,
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:22:15 compute-0 friendly_booth[463230]:        "type": "bluestore"
Nov 26 02:22:15 compute-0 friendly_booth[463230]:    }
Nov 26 02:22:15 compute-0 friendly_booth[463230]: }
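The friendly_booth output above is plain JSON keyed by OSD UUID, one entry per bluestore OSD on this host. A minimal parsing sketch, assuming the block has been captured to osd_list.json (a hypothetical file name):

    import json

    with open("osd_list.json") as f:
        osds = json.load(f)

    # Print one line per OSD, ordered by osd_id.
    for info in sorted(osds.values(), key=lambda o: o["osd_id"]):
        print("osd.%d  %s  %s  fsid=%s" % (
            info["osd_id"], info["type"], info["device"], info["ceph_fsid"]))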
Nov 26 02:22:15 compute-0 systemd[1]: libpod-b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249.scope: Deactivated successfully.
Nov 26 02:22:15 compute-0 systemd[1]: libpod-b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249.scope: Consumed 1.278s CPU time.
Nov 26 02:22:15 compute-0 podman[463214]: 2025-11-26 02:22:15.506704651 +0000 UTC m=+1.570644227 container died b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff83543baf6bd2d8478e9fba11d9b5ebc4805a12eb7d2a4887af879f519008b9-merged.mount: Deactivated successfully.
Nov 26 02:22:15 compute-0 podman[463214]: 2025-11-26 02:22:15.603702308 +0000 UTC m=+1.667641854 container remove b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 02:22:15 compute-0 systemd[1]: libpod-conmon-b87bc75526181dfd24bf2af419eb63da0dc30be237bfcf5be153efce18352249.scope: Deactivated successfully.
Nov 26 02:22:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:22:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:22:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev cd04a225-37cd-4f88-870f-fba987e9c9e8 does not exist
Nov 26 02:22:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 75d8363c-f4cb-4414-9e2e-8fb12951b180 does not exist
Nov 26 02:22:15 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:22:17 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:18 compute-0 nova_compute[350387]: 2025-11-26 02:22:18.875 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:19 compute-0 nova_compute[350387]: 2025-11-26 02:22:19.532 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:19 compute-0 podman[463324]: 2025-11-26 02:22:19.577511307 +0000 UTC m=+0.121133505 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:22:19 compute-0 podman[463325]: 2025-11-26 02:22:19.639559446 +0000 UTC m=+0.180281172 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:22:19 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:21 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:23 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:23 compute-0 nova_compute[350387]: 2025-11-26 02:22:23.878 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:24 compute-0 nova_compute[350387]: 2025-11-26 02:22:24.534 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:24 compute-0 podman[463370]: 2025-11-26 02:22:24.582543631 +0000 UTC m=+0.126442434 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 26 02:22:24 compute-0 podman[463369]: 2025-11-26 02:22:24.618401446 +0000 UTC m=+0.165067746 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, name=ubi9, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:22:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:22:25.007 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:22:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:22:25.008 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:22:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:22:25.009 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:22:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:25 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:22:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2435009382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:22:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:22:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2435009382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
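The two audited mon_commands above ("df" and "osd pool get-quota") are the same queries the plain ceph CLI issues. A minimal sketch reproducing them, assuming a reachable cluster and a client keyring on the node:

    import json
    import subprocess

    def mon_cmd(*args):
        # Shell out to the ceph CLI and decode its JSON output.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(df["stats"])
    print(quota)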
Nov 26 02:22:27 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:28 compute-0 nova_compute[350387]: 2025-11-26 02:22:28.880 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:29 compute-0 nova_compute[350387]: 2025-11-26 02:22:29.538 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:30 compute-0 podman[158021]: time="2025-11-26T02:22:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:22:30 compute-0 podman[158021]: @ - - [26/Nov/2025:02:22:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:22:30 compute-0 podman[158021]: @ - - [26/Nov/2025:02:22:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8652 "" "Go-http-client/1.1"
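The two GET requests above are served by podman's libpod REST API on the socket that was mounted into podman_exporter earlier in the log (/run/podman/podman.sock). A minimal stdlib sketch of the first call; the version prefix is copied from the log line:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client speaks HTTP over any connected socket, so only
        # connect() needs overriding for a unix-domain endpoint.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")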
Nov 26 02:22:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:30 compute-0 podman[463405]: 2025-11-26 02:22:30.556710397 +0000 UTC m=+0.101399152 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public)
Nov 26 02:22:30 compute-0 podman[463406]: 2025-11-26 02:22:30.597218682 +0000 UTC m=+0.133535402 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
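Both exporters above serve Prometheus metrics on the ports in their config_data (9105 for openstack_network_exporter, 9100 for node_exporter). A minimal scrape sketch; plain HTTP is an assumption here, since each exporter's web.config.file may enforce TLS:

    import urllib.request

    # Fetch the first few hundred bytes of the node_exporter payload.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        print(resp.read().decode()[:400])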
Nov 26 02:22:31 compute-0 openstack_network_exporter[367323]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:22:31 compute-0 openstack_network_exporter[367323]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:22:31 compute-0 openstack_network_exporter[367323]: ERROR   02:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:22:31 compute-0 openstack_network_exporter[367323]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:22:31 compute-0 openstack_network_exporter[367323]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
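The exporter locates each daemon through its control socket, so these errors mean no matching .ctl files were found; ovn-northd normally runs on the control plane rather than on a compute node, so its two lookups are expected to fail here. A minimal sketch of the same discovery, using the conventional rundir patterns (an assumption, not paths taken from the log):

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket files found")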
Nov 26 02:22:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:33 compute-0 nova_compute[350387]: 2025-11-26 02:22:33.883 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:34 compute-0 nova_compute[350387]: 2025-11-26 02:22:34.541 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:38 compute-0 nova_compute[350387]: 2025-11-26 02:22:38.887 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:39 compute-0 nova_compute[350387]: 2025-11-26 02:22:39.545 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:22:41
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control']
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:22:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:22:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.877 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.879 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.888 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.894 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'name': 'te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.894 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.894 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.894 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.894 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.895 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
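The five lines above repeat for every meter in this cycle: poll start, a coordination check against a (here unconfigured) hashring, a heartbeat update, then poll finish. A loose sketch of that control flow, with invented names rather than the actual ceilometer internals:

from datetime import datetime, timezone

def run_pollster(name, poll_fn, heartbeats, coordination_group=None):
    # Mirrors the logged sequence: poll -> coordination check -> heartbeat
    # -> finish. All names here are illustrative.
    print(f"Polling pollster {name}")
    if coordination_group is None:
        # No source requires coordination for this pollster, so this agent
        # polls it unconditionally (the "current hashrings are [None]" case).
        pass
    heartbeats[name] = datetime.now(timezone.utc)  # the heartbeat update
    samples = poll_fn()
    print(f"Finished polling pollster {name}")
    return samples

heartbeats = {}
run_pollster("disk.ephemeral.size", lambda: [], heartbeats)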
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:22:42.894789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:22:42.896850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
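Note the thread ids: worker thread 15 emits the heartbeat, while thread 12 logs "Updated heartbeat for ..." slightly later, which suggests a separate status thread consuming timestamps the workers publish. A hypothetical producer/consumer sketch of that split (the real mechanism may differ):

import queue
import threading
from datetime import datetime, timezone

beats: queue.Queue = queue.Queue()

def status_thread():
    # Stand-in for the thread logging "Updated heartbeat for ..." above.
    while True:
        name, ts = beats.get()
        if name is None:  # sentinel to stop the sketch
            break
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

t = threading.Thread(target=status_thread)
t.start()
beats.put(("disk.ephemeral.size", datetime.now(timezone.utc)))  # worker side
beats.put((None, None))
t.join()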
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.904 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.909 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.910 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.910 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.911 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.911 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.911 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.911 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.912 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.913 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:22:42.911386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.915 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:22:42.912651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.915 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.915 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.916 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.916 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.916 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.916 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:22:42.915024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.917 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.917 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.917 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:22:42.917126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.917 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.918 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:22:42.918787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:42.974 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 336460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.010 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/cpu volume: 337730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.011 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
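The cpu volumes above (336460000000 and 337730000000) are cumulative guest CPU time in nanoseconds, not percentages, so utilisation has to be derived from two successive readings. A worked example, in which the previous reading and the 300 s polling interval are assumptions:

ns_now = 336_460_000_000        # this cycle's reading for instance 74d081af
ns_prev = 336_400_000_000       # hypothetical previous reading
interval_s, vcpus = 300, 1      # assumed interval; m1.nano has 1 vCPU
cpu_util_pct = (ns_now - ns_prev) / (interval_s * vcpus * 1e9) * 100
print(f"{cpu_util_pct:.3f}%")   # 0.020%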
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.012 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.012 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.013 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.013 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:22:43.013022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.014 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.015 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.015 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.016 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.016 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.016 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.016 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.016 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.017 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/memory.usage volume: 42.34375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:22:43.016546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
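memory.usage is reported in MB, so the two readings can be set against the m1.nano flavor's ram=128 from the discovery payloads above to get each guest's resident share:

flavor_ram_mb = 128                      # m1.nano, from the discovery payload
for usage_mb in (42.328125, 42.34375):   # the two readings above
    print(f"{usage_mb / flavor_ram_mb:.1%}")  # 33.1% for both guests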
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.018 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.019 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.019 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.019 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.020 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:22:43.019542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.022 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.022 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.022 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.023 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.023 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.024 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
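A .delta meter is the difference between the current cumulative reading and the one cached from the previous cycle (the first cycle for a resource yields no delta). Reconstructing this cycle's numbers under that assumption, with a hypothetical cache:

current = {"74d081af": 2150, "add194b7": 1976}   # network.incoming.bytes above
previous = {"74d081af": 1520, "add194b7": 1976}  # hypothetical cached readings
deltas = {iid: current[iid] - previous[iid] for iid in current}
print(deltas)  # {'74d081af': 630, 'add194b7': 0}, matching the .delta samples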
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.024 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.025 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.025 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:22:43.022814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.026 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.026 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.026 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.027 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.028 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.028 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:22:43.026297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.029 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.029 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.030 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.030 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.031 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:22:43.030203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.032 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.032 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.032 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.032 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.033 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.033 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.033 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:22:43.033206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.034 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.035 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.035 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.035 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.035 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.035 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.036 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:22:43.035934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.063 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.064 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.089 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.089 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.091 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
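Each instance emits one disk.device.capacity sample per block device, which is why every UUID appears twice above: 1073741824 bytes (2^30, matching the flavor's disk=1) and a much smaller 509952-byte device, plausibly a config drive, though the log does not name it. A sketch of regrouping such per-device samples by instance:

samples = [  # (instance id prefix, capacity in bytes) from this cycle
    ("74d081af", 1_073_741_824), ("74d081af", 509_952),
    ("add194b7", 1_073_741_824), ("add194b7", 509_952),
]
by_instance: dict[str, list[int]] = {}
for iid, capacity in samples:
    by_instance.setdefault(iid, []).append(capacity)
print(by_instance)  # two devices per instance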
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.091 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.091 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.091 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.092 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.092 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:22:43.092277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.161 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.162 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.232 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 31291904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.232 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.234 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.234 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.234 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.234 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.235 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.235 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.235 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.236 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2432488124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:22:43.235534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.236 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 867897915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.237 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 2793486770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.237 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 209467376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.238 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.238 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.239 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.239 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.239 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.239 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.240 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.240 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 1145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.241 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.242 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:22:43.239430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.243 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.243 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.243 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.243 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.244 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.244 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.245 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.246 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.246 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:22:43.243546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.246 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.247 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.247 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:22:43.247577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.248 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.248 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.249 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.249 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.250 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.250 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.250 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.251 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:22:43.251287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.251 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.252 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.252 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
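The power.state samples above report volume 1 for both instances. A minimal sketch of what that integer encodes, assuming the pollster reuses nova's power-state numbering (where 1 is RUNNING); the mapping below is illustrative, not ceilometer's actual table:

    # Hypothetical mapping, mirroring nova.compute.power_state constants
    # (an assumption; check the deployed ceilometer/nova source to confirm).
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # "power.state volume: 1" for both instances therefore reads as RUNNING.
    print(POWER_STATES[1])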
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.253 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.253 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.254 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:22:43.254022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.254 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 9013075611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.255 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.255 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 8178329181 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.256 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.257 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.257 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:22:43.257688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.258 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.258 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.258 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.259 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.259 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.259 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.259 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.259 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.260 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.260 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.260 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.260 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.260 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.261 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.261 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
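Each per-device meter above logs two volumes per instance, one per disk device: a 1073741824-byte (1 GiB) device and a 509952-byte one. A minimal sketch of the fan-out, assuming one sample per (instance, device) pair with the resource id derived from both; the helper and device names are invented for illustration:

    # Illustrative only: how a per-device pollster yields paired samples.
    def per_device_samples(instance_id, device_stats):
        # device_stats: e.g. [('vda', 1073741824), ('vdb', 509952)]
        for dev, value in device_stats:
            yield {'resource_id': f'{instance_id}-{dev}', 'volume': value}

    for s in per_device_samples('74d081af-66cd-4e37-99e4-31f777885766',
                                [('vda', 1073741824), ('vdb', 509952)]):
        print(s)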
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:22:43.260148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.266 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:22:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:22:43.267 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
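The block above is one complete polling pass: for each pollster the agent runs discovery, checks whether coordination applies, records a heartbeat from a sibling process (pid 12, versus the polling pid 15), emits per-resource samples, and logs completion. A condensed, self-contained model of that cycle, using simplified names that are not ceilometer's real internals:

    import datetime

    def discover(method):
        # stands in for AgentManager.discover('local_instances')
        return ['74d081af-66cd-4e37-99e4-31f777885766',
                'add194b7-6a6c-48ef-8355-3344185eb43e']

    def run_pollster(name, get_volumes, requires_coordination=False):
        resources = discover('local_instances')
        if not requires_coordination:                  # "Checking if we need coordination"
            stamp = datetime.datetime.utcnow().isoformat()
            print(f'Updated heartbeat for {name} ({stamp})')
        for res in resources:
            for volume in get_volumes(res):            # "_stats_to_sample ... volume: N"
                print(f'{res}/{name} volume: {volume}')
        print(f'Finished polling pollster {name}')

    run_pollster('disk.device.usage', lambda res: [1073741824, 509952])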
Nov 26 02:22:43 compute-0 nova_compute[350387]: 2025-11-26 02:22:43.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:43 compute-0 podman[463452]: 2025-11-26 02:22:43.593042424 +0000 UTC m=+0.128238454 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:22:43 compute-0 podman[463451]: 2025-11-26 02:22:43.594793413 +0000 UTC m=+0.135037715 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:22:43 compute-0 podman[463453]: 2025-11-26 02:22:43.614872796 +0000 UTC m=+0.143269406 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:22:43 compute-0 nova_compute[350387]: 2025-11-26 02:22:43.889 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.345 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.346 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.346 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.347 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.349 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.548 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:22:44 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277366490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.880 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
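The half-second subprocess above is nova sizing its RBD-backed storage with the exact command shown. A sketch of running and parsing it, assuming the standard `ceph df --format=json` layout with a top-level 'pools' list; the helper is hypothetical, not nova's actual rbd_utils code:

    import json
    import subprocess

    def ceph_pool_stats(pool, user='openstack', conf='/etc/ceph/ceph.conf'):
        # same command the log records at processutils.py:384
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        df = json.loads(out)
        # each pool entry carries a 'stats' dict with fields such as 'max_avail'
        return next(p['stats'] for p in df['pools'] if p['name'] == pool)

    # e.g. ceph_pool_stats('vms')['max_avail'] would feed a free-disk figure
    # like the free_disk value reported a few lines below.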
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.990 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:22:44 compute-0 nova_compute[350387]: 2025-11-26 02:22:44.991 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.004 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.005 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 26 02:22:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.660 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.663 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3519MB free_disk=59.897003173828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.665 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:22:45 compute-0 nova_compute[350387]: 2025-11-26 02:22:45.666 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.037 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.038 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.038 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.038 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:22:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.206 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:22:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:22:46 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740663077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.716 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.729 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.758 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
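The inventory line above is what placement schedules against. Under the usual placement capacity rule, usable limit = (total - reserved) * allocation_ratio (a simplification of placement's actual capacity check); worked out with these numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        limit = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, 'schedulable:', limit)
    # VCPU schedulable: 32.0, MEMORY_MB: 7167.0, DISK_GB: 52.2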
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.761 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:22:46 compute-0 nova_compute[350387]: 2025-11-26 02:22:46.761 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:22:47 compute-0 nova_compute[350387]: 2025-11-26 02:22:47.764 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:47 compute-0 nova_compute[350387]: 2025-11-26 02:22:47.765 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:47 compute-0 nova_compute[350387]: 2025-11-26 02:22:47.765 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.301 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.671 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.672 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.672 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 02:22:48 compute-0 nova_compute[350387]: 2025-11-26 02:22:48.893 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:49 compute-0 nova_compute[350387]: 2025-11-26 02:22:49.553 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:50 compute-0 nova_compute[350387]: 2025-11-26 02:22:50.232 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 02:22:50 compute-0 nova_compute[350387]: 2025-11-26 02:22:50.257 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 02:22:50 compute-0 nova_compute[350387]: 2025-11-26 02:22:50.258 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
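Note that the heal pass above refreshed only add194b7's cache even though two instances run on this host; nova's _heal_instance_info_cache works through instances one per periodic run. A sketch of that round-robin pattern, assuming a simple rotating queue (nova's actual bookkeeping differs in detail):

    from collections import deque

    heal_queue = deque(['74d081af-66cd-4e37-99e4-31f777885766',
                        'add194b7-6a6c-48ef-8355-3344185eb43e'])

    def heal_one(refresh):
        inst = heal_queue.popleft()
        refresh(inst)              # "Forcefully refreshing network info cache"
        heal_queue.append(inst)    # the next pass picks the other instance

    heal_one(lambda inst: print('refreshed', inst))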
Nov 26 02:22:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:50 compute-0 podman[463556]: 2025-11-26 02:22:50.612366654 +0000 UTC m=+0.153934454 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Nov 26 02:22:50 compute-0 podman[463557]: 2025-11-26 02:22:50.690593836 +0000 UTC m=+0.232290420 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00152159845672983 of space, bias 1.0, pg target 0.456479537018949 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:22:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
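Each pg_autoscaler line above computes its "pg target" as usage ratio x bias x a cluster-wide PG budget. The budget consistent with every line here is 300, plausibly 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption; the log does not state it). The per-pool minimums and anti-flap damping that produce the final "quantized" figure are not modelled:

    PG_BUDGET = 300   # assumption: 3 OSDs * mon_target_pg_per_osd (100)

    for pool, ratio, bias in [('.mgr',   7.185749983720779e-06, 1.0),
                              ('vms',    0.00152159845672983,   1.0),
                              ('images', 0.00125203744627857,   1.0)]:
        print(pool, ratio * bias * PG_BUDGET)
    # .mgr   0.0021557249951162337  -> matches "pg target" logged above
    # vms    0.456479537018949
    # images 0.375611233883571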
Nov 26 02:22:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:53 compute-0 nova_compute[350387]: 2025-11-26 02:22:53.897 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:54 compute-0 nova_compute[350387]: 2025-11-26 02:22:54.559 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:22:55 compute-0 podman[463602]: 2025-11-26 02:22:55.575471133 +0000 UTC m=+0.109127659 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:22:55 compute-0 podman[463601]: 2025-11-26 02:22:55.589236448 +0000 UTC m=+0.140933769 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:22:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:56 compute-0 nova_compute[350387]: 2025-11-26 02:22:56.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:22:58 compute-0 nova_compute[350387]: 2025-11-26 02:22:58.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:58 compute-0 nova_compute[350387]: 2025-11-26 02:22:58.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:22:58 compute-0 nova_compute[350387]: 2025-11-26 02:22:58.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:58 compute-0 nova_compute[350387]: 2025-11-26 02:22:58.899 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:59 compute-0 nova_compute[350387]: 2025-11-26 02:22:59.311 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:22:59 compute-0 nova_compute[350387]: 2025-11-26 02:22:59.562 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:22:59 compute-0 podman[158021]: time="2025-11-26T02:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:22:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:22:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8632 "" "Go-http-client/1.1"
Nov 26 02:23:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:01 compute-0 openstack_network_exporter[367323]: ERROR   02:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:23:01 compute-0 openstack_network_exporter[367323]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:23:01 compute-0 openstack_network_exporter[367323]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:23:01 compute-0 openstack_network_exporter[367323]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:23:01 compute-0 openstack_network_exporter[367323]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:23:01 compute-0 podman[463637]: 2025-11-26 02:23:01.592398676 +0000 UTC m=+0.139739466 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git)
Nov 26 02:23:01 compute-0 podman[463638]: 2025-11-26 02:23:01.622494849 +0000 UTC m=+0.163937884 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:23:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:03 compute-0 nova_compute[350387]: 2025-11-26 02:23:03.903 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:04 compute-0 nova_compute[350387]: 2025-11-26 02:23:04.566 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:08 compute-0 nova_compute[350387]: 2025-11-26 02:23:08.908 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:09 compute-0 nova_compute[350387]: 2025-11-26 02:23:09.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:23:09 compute-0 nova_compute[350387]: 2025-11-26 02:23:09.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 02:23:09 compute-0 nova_compute[350387]: 2025-11-26 02:23:09.569 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:13 compute-0 nova_compute[350387]: 2025-11-26 02:23:13.910 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:14 compute-0 nova_compute[350387]: 2025-11-26 02:23:14.574 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:14 compute-0 podman[463681]: 2025-11-26 02:23:14.589951668 +0000 UTC m=+0.134692055 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.4)
Nov 26 02:23:14 compute-0 podman[463682]: 2025-11-26 02:23:14.592278593 +0000 UTC m=+0.127040011 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 02:23:14 compute-0 podman[463683]: 2025-11-26 02:23:14.594662139 +0000 UTC m=+0.128695876 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:23:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:17 compute-0 podman[463912]: 2025-11-26 02:23:17.583291616 +0000 UTC m=+0.146307720 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:23:17 compute-0 podman[463912]: 2025-11-26 02:23:17.700692446 +0000 UTC m=+0.263708520 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:23:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:23:18 compute-0 nova_compute[350387]: 2025-11-26 02:23:18.913 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:23:18 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:19 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:19 compute-0 nova_compute[350387]: 2025-11-26 02:23:19.577 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:20 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ec64248f-add9-40e7-8e40-aae6236a6bd9 does not exist
Nov 26 02:23:20 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev c9a422f2-0f65-43c0-ba2f-94ab70cdd8d3 does not exist
Nov 26 02:23:20 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev d8be4cec-729a-4959-b06d-1908441c93d0 does not exist
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:23:20 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:23:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:20 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:23:20 compute-0 podman[464269]: 2025-11-26 02:23:20.941870618 +0000 UTC m=+0.165579550 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:23:20 compute-0 podman[464270]: 2025-11-26 02:23:20.959969505 +0000 UTC m=+0.182418482 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.497167746 +0000 UTC m=+0.064083856 container create 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.470493099 +0000 UTC m=+0.037409189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:21 compute-0 systemd[1]: Started libpod-conmon-82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d.scope.
Nov 26 02:23:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.643686742 +0000 UTC m=+0.210602902 container init 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.656570863 +0000 UTC m=+0.223486963 container start 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.662495429 +0000 UTC m=+0.229411529 container attach 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:23:21 compute-0 wonderful_shamir[464393]: 167 167
Nov 26 02:23:21 compute-0 systemd[1]: libpod-82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d.scope: Deactivated successfully.
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.668738214 +0000 UTC m=+0.235654284 container died 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 02:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-58160d65f1de7769a983a8e42e10921508d862c2b7e8080014a8c36669456984-merged.mount: Deactivated successfully.
Nov 26 02:23:21 compute-0 podman[464377]: 2025-11-26 02:23:21.738680353 +0000 UTC m=+0.305596423 container remove 82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:23:21 compute-0 systemd[1]: libpod-conmon-82a4a98c3fdb25201533e2463cd2a41db34500017215b143f362ad2c0868b83d.scope: Deactivated successfully.
Nov 26 02:23:21 compute-0 podman[464415]: 2025-11-26 02:23:21.983672088 +0000 UTC m=+0.080790845 container create 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:23:22 compute-0 podman[464415]: 2025-11-26 02:23:21.94450647 +0000 UTC m=+0.041625247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:22 compute-0 systemd[1]: Started libpod-conmon-9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c.scope.
Nov 26 02:23:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
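[editor's note] The repeated kernel warnings above reference 0x7fffffff, the 32-bit signed time_t maximum; a quick conversion shows exactly where the "until 2038" limit lands:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest 32-bit signed time_t value; converting it
# gives the Y2038 cutoff the xfs messages refer to.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```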
Nov 26 02:23:22 compute-0 podman[464415]: 2025-11-26 02:23:22.157748685 +0000 UTC m=+0.254867482 container init 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:23:22 compute-0 podman[464415]: 2025-11-26 02:23:22.181662695 +0000 UTC m=+0.278781432 container start 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:23:22 compute-0 podman[464415]: 2025-11-26 02:23:22.187433747 +0000 UTC m=+0.284552564 container attach 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:23:23 compute-0 nova_compute[350387]: 2025-11-26 02:23:23.323 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:23:23 compute-0 nova_compute[350387]: 2025-11-26 02:23:23.323 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 02:23:23 compute-0 nova_compute[350387]: 2025-11-26 02:23:23.345 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 02:23:23 compute-0 crazy_dirac[464432]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:23:23 compute-0 crazy_dirac[464432]: --> relative data size: 1.0
Nov 26 02:23:23 compute-0 crazy_dirac[464432]: --> All data devices are unavailable
Nov 26 02:23:23 compute-0 systemd[1]: libpod-9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c.scope: Deactivated successfully.
Nov 26 02:23:23 compute-0 systemd[1]: libpod-9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c.scope: Consumed 1.256s CPU time.
Nov 26 02:23:23 compute-0 podman[464415]: 2025-11-26 02:23:23.541785863 +0000 UTC m=+1.638904630 container died 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 02:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f7bbe54276ea8db7898daa28d29b1ebabd2bb2039e822c4f37b75454dd5d4e-merged.mount: Deactivated successfully.
Nov 26 02:23:23 compute-0 podman[464415]: 2025-11-26 02:23:23.653006069 +0000 UTC m=+1.750124836 container remove 9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:23:23 compute-0 systemd[1]: libpod-conmon-9e4f1c93db44732efeccd1f655d98b8fa92925651371bb020e9091a9121e6f8c.scope: Deactivated successfully.
Nov 26 02:23:23 compute-0 nova_compute[350387]: 2025-11-26 02:23:23.916 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:24 compute-0 nova_compute[350387]: 2025-11-26 02:23:24.580 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:23:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:23:25.009 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:23:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:23:25.011 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:23:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:23:25.012 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.053645433 +0000 UTC m=+0.114465198 container create 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.006609325 +0000 UTC m=+0.067429110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:25 compute-0 systemd[1]: Started libpod-conmon-069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0.scope.
Nov 26 02:23:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.214202831 +0000 UTC m=+0.275022586 container init 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.233693698 +0000 UTC m=+0.294513463 container start 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 02:23:25 compute-0 stupefied_mahavira[464626]: 167 167
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.240352864 +0000 UTC m=+0.301172629 container attach 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.241430664 +0000 UTC m=+0.302250399 container died 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 02:23:25 compute-0 systemd[1]: libpod-069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0.scope: Deactivated successfully.
Nov 26 02:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-625bbb5eeff20f5e5358a5ac4f768bf6f6a6525cc247a0b931c396d823cdb6d5-merged.mount: Deactivated successfully.
Nov 26 02:23:25 compute-0 podman[464611]: 2025-11-26 02:23:25.297966628 +0000 UTC m=+0.358786363 container remove 069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mahavira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:23:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:25 compute-0 systemd[1]: libpod-conmon-069fd30a3f764f595f9286b15dc5bacd2ac223166539a6c29ce749d2b40520d0.scope: Deactivated successfully.
Nov 26 02:23:25 compute-0 podman[464648]: 2025-11-26 02:23:25.572796729 +0000 UTC m=+0.101986059 container create 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:23:25 compute-0 podman[464648]: 2025-11-26 02:23:25.545190125 +0000 UTC m=+0.074379405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:25 compute-0 systemd[1]: Started libpod-conmon-05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784.scope.
Nov 26 02:23:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e554f6c1804ca0ce60ef5e3ab3af6b622dec09f9087b3c57b88ffabb2ba84aa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e554f6c1804ca0ce60ef5e3ab3af6b622dec09f9087b3c57b88ffabb2ba84aa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e554f6c1804ca0ce60ef5e3ab3af6b622dec09f9087b3c57b88ffabb2ba84aa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e554f6c1804ca0ce60ef5e3ab3af6b622dec09f9087b3c57b88ffabb2ba84aa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:25 compute-0 podman[464648]: 2025-11-26 02:23:25.700666891 +0000 UTC m=+0.229856201 container init 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:23:25 compute-0 podman[464648]: 2025-11-26 02:23:25.726171306 +0000 UTC m=+0.255360546 container start 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 26 02:23:25 compute-0 podman[464648]: 2025-11-26 02:23:25.737998817 +0000 UTC m=+0.267188077 container attach 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:23:25 compute-0 podman[464665]: 2025-11-26 02:23:25.76629398 +0000 UTC m=+0.106326800 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30)
Nov 26 02:23:25 compute-0 podman[464667]: 2025-11-26 02:23:25.790703454 +0000 UTC m=+0.125638151 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 02:23:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:26 compute-0 cranky_einstein[464664]: {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    "0": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "devices": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "/dev/loop3"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            ],
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_name": "ceph_lv0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_size": "21470642176",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "name": "ceph_lv0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "tags": {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_name": "ceph",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.crush_device_class": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.encrypted": "0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_id": "0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.vdo": "0"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            },
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "vg_name": "ceph_vg0"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        }
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    ],
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    "1": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "devices": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "/dev/loop4"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            ],
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_name": "ceph_lv1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_size": "21470642176",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "name": "ceph_lv1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "tags": {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_name": "ceph",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.crush_device_class": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.encrypted": "0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_id": "1",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.vdo": "0"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            },
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "vg_name": "ceph_vg1"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        }
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    ],
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    "2": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "devices": [
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "/dev/loop5"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            ],
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_name": "ceph_lv2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_size": "21470642176",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "name": "ceph_lv2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "tags": {
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.cluster_name": "ceph",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.crush_device_class": "",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.encrypted": "0",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osd_id": "2",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:                "ceph.vdo": "0"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            },
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "type": "block",
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:            "vg_name": "ceph_vg2"
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:        }
Nov 26 02:23:26 compute-0 cranky_einstein[464664]:    ]
Nov 26 02:23:26 compute-0 cranky_einstein[464664]: }
Nov 26 02:23:26 compute-0 systemd[1]: libpod-05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784.scope: Deactivated successfully.
Nov 26 02:23:26 compute-0 podman[464712]: 2025-11-26 02:23:26.664968429 +0000 UTC m=+0.055223518 container died 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e554f6c1804ca0ce60ef5e3ab3af6b622dec09f9087b3c57b88ffabb2ba84aa1-merged.mount: Deactivated successfully.
Nov 26 02:23:26 compute-0 podman[464712]: 2025-11-26 02:23:26.805176387 +0000 UTC m=+0.195431406 container remove 05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 02:23:26 compute-0 systemd[1]: libpod-conmon-05d2aac32d32e5e67b421261ef3a99f2f62a863ba33df7f63f7014973ada6784.scope: Deactivated successfully.
Nov 26 02:23:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:23:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936929969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:23:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:23:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936929969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.014685236 +0000 UTC m=+0.062426760 container create b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 26 02:23:28 compute-0 systemd[1]: Started libpod-conmon-b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416.scope.
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:27.992075982 +0000 UTC m=+0.039817546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.121366385 +0000 UTC m=+0.169107959 container init b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:23:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.133470874 +0000 UTC m=+0.181212378 container start b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.137654541 +0000 UTC m=+0.185396085 container attach b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 02:23:28 compute-0 zen_ride[464883]: 167 167
Nov 26 02:23:28 compute-0 systemd[1]: libpod-b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416.scope: Deactivated successfully.
Nov 26 02:23:28 compute-0 conmon[464883]: conmon b21e613928bc866a2221 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416.scope/container/memory.events
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.150238684 +0000 UTC m=+0.197980208 container died b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 02:23:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f7c779df6d65d1d4a438a197b57414e5adf07ed72c993a57bbd5e0069a50ca9-merged.mount: Deactivated successfully.
Nov 26 02:23:28 compute-0 podman[464867]: 2025-11-26 02:23:28.212137068 +0000 UTC m=+0.259878572 container remove b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 02:23:28 compute-0 systemd[1]: libpod-conmon-b21e613928bc866a22213f5f38779b8288858ff8eb2433caa6c7ad5b49b18416.scope: Deactivated successfully.
Nov 26 02:23:28 compute-0 podman[464908]: 2025-11-26 02:23:28.472097572 +0000 UTC m=+0.075897188 container create 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 26 02:23:28 compute-0 podman[464908]: 2025-11-26 02:23:28.438885781 +0000 UTC m=+0.042685457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:23:28 compute-0 systemd[1]: Started libpod-conmon-61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331.scope.
Nov 26 02:23:28 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6d01fc38d599d5555e7b48b6ed015b1be15f181e8939af6d3cc8657c422a84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6d01fc38d599d5555e7b48b6ed015b1be15f181e8939af6d3cc8657c422a84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6d01fc38d599d5555e7b48b6ed015b1be15f181e8939af6d3cc8657c422a84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c6d01fc38d599d5555e7b48b6ed015b1be15f181e8939af6d3cc8657c422a84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:23:28 compute-0 podman[464908]: 2025-11-26 02:23:28.637444675 +0000 UTC m=+0.241244291 container init 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:23:28 compute-0 podman[464908]: 2025-11-26 02:23:28.649255266 +0000 UTC m=+0.253054852 container start 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:23:28 compute-0 podman[464908]: 2025-11-26 02:23:28.653175846 +0000 UTC m=+0.256975442 container attach 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:23:28 compute-0 nova_compute[350387]: 2025-11-26 02:23:28.918 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:29 compute-0 nova_compute[350387]: 2025-11-26 02:23:29.583 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:29 compute-0 podman[158021]: time="2025-11-26T02:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:23:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45383 "" "Go-http-client/1.1"
Nov 26 02:23:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9077 "" "Go-http-client/1.1"
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]: {
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_id": 0,
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "type": "bluestore"
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    },
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_id": 2,
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "type": "bluestore"
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    },
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_id": 1,
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:        "type": "bluestore"
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]:    }
Nov 26 02:23:29 compute-0 gifted_aryabhata[464924]: }
Nov 26 02:23:29 compute-0 systemd[1]: libpod-61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331.scope: Deactivated successfully.
Nov 26 02:23:29 compute-0 systemd[1]: libpod-61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331.scope: Consumed 1.182s CPU time.
Nov 26 02:23:29 compute-0 podman[464957]: 2025-11-26 02:23:29.908382304 +0000 UTC m=+0.052689898 container died 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c6d01fc38d599d5555e7b48b6ed015b1be15f181e8939af6d3cc8657c422a84-merged.mount: Deactivated successfully.
Nov 26 02:23:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:30 compute-0 podman[464957]: 2025-11-26 02:23:30.128074139 +0000 UTC m=+0.272381673 container remove 61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:23:30 compute-0 systemd[1]: libpod-conmon-61d4145b8a58a0df8790ca9995165d8876556afb5aab1c658c89436f6009a331.scope: Deactivated successfully.
Nov 26 02:23:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:23:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:23:30 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 4828823d-07fe-4760-9a91-6860481a2316 does not exist
Nov 26 02:23:30 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a4c852db-0331-49e6-baba-ef129a9920a3 does not exist
Nov 26 02:23:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:31 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: ERROR   02:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:23:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:23:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:32 compute-0 podman[465024]: 2025-11-26 02:23:32.572524809 +0000 UTC m=+0.121895096 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:23:32 compute-0 podman[465023]: 2025-11-26 02:23:32.581284204 +0000 UTC m=+0.132591276 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm)
Nov 26 02:23:33 compute-0 nova_compute[350387]: 2025-11-26 02:23:33.919 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:34 compute-0 nova_compute[350387]: 2025-11-26 02:23:34.585 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:38 compute-0 nova_compute[350387]: 2025-11-26 02:23:38.927 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:39 compute-0 nova_compute[350387]: 2025-11-26 02:23:39.588 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:23:41
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'volumes']
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:23:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:23:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:43 compute-0 nova_compute[350387]: 2025-11-26 02:23:43.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:43 compute-0 nova_compute[350387]: 2025-11-26 02:23:43.931 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:44 compute-0 nova_compute[350387]: 2025-11-26 02:23:44.592 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:44 compute-0 podman[465067]: 2025-11-26 02:23:44.870317612 +0000 UTC m=+0.116672990 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 26 02:23:44 compute-0 podman[465068]: 2025-11-26 02:23:44.888195943 +0000 UTC m=+0.127558565 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:23:44 compute-0 podman[465066]: 2025-11-26 02:23:44.893033978 +0000 UTC m=+0.144643504 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.343 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.344 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:23:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:23:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824386748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:23:45 compute-0 nova_compute[350387]: 2025-11-26 02:23:45.929 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:23:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.150 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.151 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.163 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.164 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.814 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.816 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.897003173828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.817 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.817 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.933 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.934 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.935 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:23:46 compute-0 nova_compute[350387]: 2025-11-26 02:23:46.936 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.030 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:23:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:23:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469114079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.568 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.581 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.599 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.602 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:23:47 compute-0 nova_compute[350387]: 2025-11-26 02:23:47.602 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:23:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:48 compute-0 nova_compute[350387]: 2025-11-26 02:23:48.934 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:49 compute-0 nova_compute[350387]: 2025-11-26 02:23:49.597 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:49 compute-0 nova_compute[350387]: 2025-11-26 02:23:49.603 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:49 compute-0 nova_compute[350387]: 2025-11-26 02:23:49.603 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:49 compute-0 nova_compute[350387]: 2025-11-26 02:23:49.604 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:23:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.819 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.819 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.820 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:23:50 compute-0 nova_compute[350387]: 2025-11-26 02:23:50.820 350391 DEBUG nova.objects.instance [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:23:51 compute-0 podman[465169]: 2025-11-26 02:23:51.60911888 +0000 UTC m=+0.152605007 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:23:51 compute-0 podman[465170]: 2025-11-26 02:23:51.683597757 +0000 UTC m=+0.222128955 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00152159845672983 of space, bias 1.0, pg target 0.456479537018949 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:23:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:23:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:52 compute-0 nova_compute[350387]: 2025-11-26 02:23:52.224 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [{"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:23:52 compute-0 nova_compute[350387]: 2025-11-26 02:23:52.251 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-74d081af-66cd-4e37-99e4-31f777885766" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:23:52 compute-0 nova_compute[350387]: 2025-11-26 02:23:52.252 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 02:23:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:23:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1327 writes, 6010 keys, 1327 commit groups, 1.0 writes per commit group, ingest: 8.67 MB, 0.01 MB/s#012Interval WAL: 1327 writes, 1327 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    112.6      0.48              0.26        31    0.016       0      0       0.0       0.0#012  L6      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    166.2    136.9      1.64              1.00        30    0.055    159K    16K       0.0       0.0#012 Sum      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    128.3    131.3      2.12              1.26        61    0.035    159K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.5    124.7    124.7      0.36              0.25        10    0.036     31K   2555       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    166.2    136.9      1.64              1.00        30    0.055    159K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    114.2      0.48              0.26        30    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.053, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.07 MB/s write, 0.27 GB read, 0.06 MB/s read, 2.1 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 304.00 MB usage: 32.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000266 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2055,31.24 MB,10.2771%) FilterBlock(62,452.61 KB,0.145395%) IndexBlock(62,748.22 KB,0.240356%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 26 02:23:53 compute-0 nova_compute[350387]: 2025-11-26 02:23:53.938 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:54 compute-0 nova_compute[350387]: 2025-11-26 02:23:54.600 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:23:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:56 compute-0 nova_compute[350387]: 2025-11-26 02:23:56.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:23:56 compute-0 podman[465217]: 2025-11-26 02:23:56.577303231 +0000 UTC m=+0.114396556 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 26 02:23:56 compute-0 podman[465216]: 2025-11-26 02:23:56.602517527 +0000 UTC m=+0.143931243 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9)
Nov 26 02:23:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:23:58 compute-0 nova_compute[350387]: 2025-11-26 02:23:58.942 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:59 compute-0 nova_compute[350387]: 2025-11-26 02:23:59.603 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:23:59 compute-0 podman[158021]: time="2025-11-26T02:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:23:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:23:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8655 "" "Go-http-client/1.1"
Nov 26 02:24:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:00 compute-0 nova_compute[350387]: 2025-11-26 02:24:00.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:00 compute-0 nova_compute[350387]: 2025-11-26 02:24:00.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:00 compute-0 nova_compute[350387]: 2025-11-26 02:24:00.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:24:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:01 compute-0 openstack_network_exporter[367323]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:24:01 compute-0 openstack_network_exporter[367323]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:24:01 compute-0 openstack_network_exporter[367323]: ERROR   02:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:24:01 compute-0 openstack_network_exporter[367323]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:24:01 compute-0 openstack_network_exporter[367323]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:24:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:03 compute-0 podman[465253]: 2025-11-26 02:24:03.60513219 +0000 UTC m=+0.143323206 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 02:24:03 compute-0 podman[465254]: 2025-11-26 02:24:03.627649761 +0000 UTC m=+0.160135288 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:24:03 compute-0 nova_compute[350387]: 2025-11-26 02:24:03.945 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:04 compute-0 nova_compute[350387]: 2025-11-26 02:24:04.607 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:05 compute-0 nova_compute[350387]: 2025-11-26 02:24:05.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:08 compute-0 nova_compute[350387]: 2025-11-26 02:24:08.948 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:09 compute-0 nova_compute[350387]: 2025-11-26 02:24:09.611 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:13 compute-0 nova_compute[350387]: 2025-11-26 02:24:13.951 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:14 compute-0 nova_compute[350387]: 2025-11-26 02:24:14.616 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:15 compute-0 podman[465296]: 2025-11-26 02:24:15.086140549 +0000 UTC m=+0.104534430 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:24:15 compute-0 podman[465295]: 2025-11-26 02:24:15.097891038 +0000 UTC m=+0.127367219 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 02:24:15 compute-0 podman[465297]: 2025-11-26 02:24:15.114610367 +0000 UTC m=+0.130056175 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:24:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:18 compute-0 nova_compute[350387]: 2025-11-26 02:24:18.955 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:19 compute-0 nova_compute[350387]: 2025-11-26 02:24:19.620 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:22 compute-0 podman[465357]: 2025-11-26 02:24:22.592640299 +0000 UTC m=+0.136857146 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:24:22 compute-0 podman[465358]: 2025-11-26 02:24:22.635563602 +0000 UTC m=+0.177802023 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 26 02:24:23 compute-0 nova_compute[350387]: 2025-11-26 02:24:23.957 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:24 compute-0 nova_compute[350387]: 2025-11-26 02:24:24.623 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:24:25.011 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:24:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:24:25.012 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:24:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:24:25.013 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
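The acquire/release pair above is oslo_concurrency's named in-process lock around the metadata agent's child-process check. A minimal sketch of the pattern, assuming oslo.concurrency is installed (the function body here is illustrative, not the agent's actual code):

    from oslo_concurrency import lockutils

    # A named in-process lock: acquisition, held time, and release are logged
    # at DEBUG exactly as in the ovn_metadata_agent lines above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # illustrative body: scan monitored child processes, respawn dead ones
        pass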
Nov 26 02:24:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:24:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/583897539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:24:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:24:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/583897539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
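The two audited mon_commands are the JSON forms of `ceph df` and `ceph osd pool get-quota`, issued by the client.openstack entity (typical of OpenStack capacity polling against the volumes pool). A rough client-side equivalent, assuming the ceph CLI and a usable keyring are present on the host and that the JSON keys match current Ceph output:

    import json
    import subprocess

    # Same queries the audit log records above, issued via the CLI rather
    # than a librados mon_command.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))
    print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])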
Nov 26 02:24:27 compute-0 podman[465404]: 2025-11-26 02:24:27.590005677 +0000 UTC m=+0.130554329 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:24:27 compute-0 podman[465403]: 2025-11-26 02:24:27.608742862 +0000 UTC m=+0.156414644 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, version=9.4)
Nov 26 02:24:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:28 compute-0 nova_compute[350387]: 2025-11-26 02:24:28.961 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:29 compute-0 nova_compute[350387]: 2025-11-26 02:24:29.627 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:29 compute-0 podman[158021]: time="2025-11-26T02:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:24:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:24:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8651 "" "Go-http-client/1.1"
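The two GETs above hit the libpod REST API over the podman service socket (the `@` client address indicates a UNIX-socket peer). A self-contained sketch of the same listing call, assuming the default root socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a UNIX socket; enough for the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Id"][:12], ctr["Names"])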
Nov 26 02:24:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:31 compute-0 openstack_network_exporter[367323]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:24:31 compute-0 openstack_network_exporter[367323]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:24:31 compute-0 openstack_network_exporter[367323]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:24:31 compute-0 openstack_network_exporter[367323]: ERROR   02:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:24:31 compute-0 openstack_network_exporter[367323]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
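These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server do not run here, so their control sockets never exist, and the pmd-perf/pmd-rxq queries fail because the kernel datapath, not dpif-netdev/DPDK, is in use. A hedged pre-check for the sockets the appctl helpers look for, assuming the usual run directories:

    import glob
    import os

    # The appctl helpers locate <daemon>.<pid>.ctl control sockets; their
    # absence produces the "no control socket files found" errors above.
    for rundir, daemon in (("/var/run/ovn", "ovn-northd"),
                           ("/var/run/openvswitch", "ovsdb-server")):
        ctl = glob.glob(os.path.join(rundir, daemon + ".*.ctl"))
        print(daemon, "->", ctl or "no control socket")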
Nov 26 02:24:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3c47ab8d-2ff0-4efd-a08b-048895e859fe does not exist
Nov 26 02:24:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev f40db4b4-7650-4105-9972-437789e4e570 does not exist
Nov 26 02:24:32 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 18de1dce-5530-4a3a-a28e-ca0183c56114 does not exist
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:24:32 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:24:32 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:24:33 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:24:33 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:33 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.385680373 +0000 UTC m=+0.094446078 container create e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.350079175 +0000 UTC m=+0.058844950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:33 compute-0 systemd[1]: Started libpod-conmon-e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908.scope.
Nov 26 02:24:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.566351475 +0000 UTC m=+0.275117160 container init e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.582547398 +0000 UTC m=+0.291313083 container start e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.588274369 +0000 UTC m=+0.297040074 container attach e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:24:33 compute-0 stupefied_moore[465728]: 167 167
Nov 26 02:24:33 compute-0 systemd[1]: libpod-e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908.scope: Deactivated successfully.
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.598325341 +0000 UTC m=+0.307091046 container died e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 02:24:33 compute-0 podman[465712]: 2025-11-26 02:24:33.673858897 +0000 UTC m=+0.382624582 container remove e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bf4698f394c6a1942d67f3d5dc53a0e4497c18040839e622b9436acc8295639-merged.mount: Deactivated successfully.
Nov 26 02:24:33 compute-0 systemd[1]: libpod-conmon-e18450cff8b8e4c8054667649f207df124a9b68c71c0c0ab7fd61caa2aa7b908.scope: Deactivated successfully.
Nov 26 02:24:33 compute-0 podman[465746]: 2025-11-26 02:24:33.80316137 +0000 UTC m=+0.086897406 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:24:33 compute-0 podman[465741]: 2025-11-26 02:24:33.827494102 +0000 UTC m=+0.133947614 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 02:24:33 compute-0 podman[465796]: 2025-11-26 02:24:33.941941978 +0000 UTC m=+0.083568942 container create d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 02:24:33 compute-0 nova_compute[350387]: 2025-11-26 02:24:33.965 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:33 compute-0 podman[465796]: 2025-11-26 02:24:33.905581529 +0000 UTC m=+0.047208503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:34 compute-0 systemd[1]: Started libpod-conmon-d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f.scope.
Nov 26 02:24:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:34 compute-0 podman[465796]: 2025-11-26 02:24:34.115744048 +0000 UTC m=+0.257371062 container init d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:24:34 compute-0 podman[465796]: 2025-11-26 02:24:34.128522436 +0000 UTC m=+0.270149400 container start d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:24:34 compute-0 podman[465796]: 2025-11-26 02:24:34.134951026 +0000 UTC m=+0.276578050 container attach d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:24:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:34 compute-0 nova_compute[350387]: 2025-11-26 02:24:34.630 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:35 compute-0 priceless_kowalevski[465812]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:24:35 compute-0 priceless_kowalevski[465812]: --> relative data size: 1.0
Nov 26 02:24:35 compute-0 priceless_kowalevski[465812]: --> All data devices are unavailable
Nov 26 02:24:35 compute-0 systemd[1]: libpod-d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f.scope: Deactivated successfully.
Nov 26 02:24:35 compute-0 systemd[1]: libpod-d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f.scope: Consumed 1.290s CPU time.
Nov 26 02:24:35 compute-0 podman[465796]: 2025-11-26 02:24:35.504638622 +0000 UTC m=+1.646265616 container died d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 02:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2cb3cebc026e9c1e0b8e59d609a0d99201c07f3a35c104ce1d3f7f1450e0d1a-merged.mount: Deactivated successfully.
Nov 26 02:24:35 compute-0 podman[465796]: 2025-11-26 02:24:35.597544695 +0000 UTC m=+1.739171629 container remove d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:24:35 compute-0 systemd[1]: libpod-conmon-d3ccbda92cc5c783199d4e317bb98bb5d697bb9d4304306b957355d1233a909f.scope: Deactivated successfully.
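The short-lived ceph container above (priceless_kowalevski) reports on LVM-backed data devices ("0 physical, 3 LVM") and exits once it finds nothing new to deploy; "All data devices are unavailable" means all three LVs are already consumed by OSDs. A hedged guess at the host-side equivalent of what cephadm ran (exact arguments assumed, not taken from the log):

    import subprocess

    # ceph-volume's batch --report previews what would be deployed; with all
    # three LVs already holding OSDs it reports no usable devices, as above.
    subprocess.run(["ceph-volume", "lvm", "batch",
                    "/dev/ceph_vg0/ceph_lv0",
                    "/dev/ceph_vg1/ceph_lv1",
                    "/dev/ceph_vg2/ceph_lv2",
                    "--report"], check=False)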
Nov 26 02:24:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.829717999 +0000 UTC m=+0.090285171 container create dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.792750483 +0000 UTC m=+0.053317675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:36 compute-0 systemd[1]: Started libpod-conmon-dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240.scope.
Nov 26 02:24:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.966187102 +0000 UTC m=+0.226754274 container init dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.984795794 +0000 UTC m=+0.245362986 container start dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.991719448 +0000 UTC m=+0.252286600 container attach dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 02:24:36 compute-0 quirky_moser[466006]: 167 167
Nov 26 02:24:36 compute-0 systemd[1]: libpod-dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240.scope: Deactivated successfully.
Nov 26 02:24:36 compute-0 podman[465990]: 2025-11-26 02:24:36.995509084 +0000 UTC m=+0.256076246 container died dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68236276558861bc326f5e795366cb9164544d886c76efb3135b726c6830e21-merged.mount: Deactivated successfully.
Nov 26 02:24:37 compute-0 podman[465990]: 2025-11-26 02:24:37.069167708 +0000 UTC m=+0.329734880 container remove dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:24:37 compute-0 systemd[1]: libpod-conmon-dd463e8553b49b416b10aac9bdba5a5e7ef7dd185f5ff08f4d4d9d5de6bb8240.scope: Deactivated successfully.
Nov 26 02:24:37 compute-0 podman[466028]: 2025-11-26 02:24:37.377560408 +0000 UTC m=+0.090821815 container create d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 02:24:37 compute-0 podman[466028]: 2025-11-26 02:24:37.3369178 +0000 UTC m=+0.050179267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:37 compute-0 systemd[1]: Started libpod-conmon-d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808.scope.
Nov 26 02:24:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45f1678d002cce17c48316b3b695b0bd3d8089a0fbd78e8a1d5489d41c1e023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45f1678d002cce17c48316b3b695b0bd3d8089a0fbd78e8a1d5489d41c1e023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45f1678d002cce17c48316b3b695b0bd3d8089a0fbd78e8a1d5489d41c1e023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d45f1678d002cce17c48316b3b695b0bd3d8089a0fbd78e8a1d5489d41c1e023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:37 compute-0 podman[466028]: 2025-11-26 02:24:37.525580376 +0000 UTC m=+0.238841833 container init d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:24:37 compute-0 podman[466028]: 2025-11-26 02:24:37.552475869 +0000 UTC m=+0.265737246 container start d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:24:37 compute-0 podman[466028]: 2025-11-26 02:24:37.558002194 +0000 UTC m=+0.271263611 container attach d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:24:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]: {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    "0": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "devices": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "/dev/loop3"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            ],
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_name": "ceph_lv0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_size": "21470642176",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "name": "ceph_lv0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "tags": {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_name": "ceph",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.crush_device_class": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.encrypted": "0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_id": "0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.vdo": "0"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            },
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "vg_name": "ceph_vg0"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        }
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    ],
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    "1": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "devices": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "/dev/loop4"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            ],
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_name": "ceph_lv1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_size": "21470642176",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "name": "ceph_lv1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "tags": {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_name": "ceph",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.crush_device_class": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.encrypted": "0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_id": "1",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.vdo": "0"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            },
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "vg_name": "ceph_vg1"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        }
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    ],
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    "2": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "devices": [
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "/dev/loop5"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            ],
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_name": "ceph_lv2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_size": "21470642176",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "name": "ceph_lv2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "tags": {
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.cluster_name": "ceph",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.crush_device_class": "",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.encrypted": "0",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osd_id": "2",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:                "ceph.vdo": "0"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            },
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "type": "block",
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:            "vg_name": "ceph_vg2"
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:        }
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]:    ]
Nov 26 02:24:38 compute-0 stupefied_feynman[466044]: }
Nov 26 02:24:38 compute-0 systemd[1]: libpod-d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808.scope: Deactivated successfully.
Nov 26 02:24:38 compute-0 podman[466028]: 2025-11-26 02:24:38.439324126 +0000 UTC m=+1.152585513 container died d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 02:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d45f1678d002cce17c48316b3b695b0bd3d8089a0fbd78e8a1d5489d41c1e023-merged.mount: Deactivated successfully.
Nov 26 02:24:38 compute-0 podman[466028]: 2025-11-26 02:24:38.550948394 +0000 UTC m=+1.264209811 container remove d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 02:24:38 compute-0 systemd[1]: libpod-conmon-d1839147b0148a5c7abb5ae41f9561238af3bc2ff7154897dfa3ac17b5061808.scope: Deactivated successfully.
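The JSON emitted by stupefied_feynman above is the shape of `ceph-volume lvm list --format json`: a map from OSD id to its logical volumes, with the ceph.* LV tags carrying cluster fsid, OSD fsid, and device class. A small parser over that structure, assuming the same command is available locally and the keys match the output shown:

    import json
    import subprocess

    # Map each OSD id to its LV path, backing device(s), and OSD fsid using
    # the structure printed above.
    report = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"]))
    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  lv["tags"]["ceph.osd_fsid"])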
Nov 26 02:24:38 compute-0 nova_compute[350387]: 2025-11-26 02:24:38.967 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:39 compute-0 nova_compute[350387]: 2025-11-26 02:24:39.635 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.727425547 +0000 UTC m=+0.087189474 container create cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.695439371 +0000 UTC m=+0.055203318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:39 compute-0 systemd[1]: Started libpod-conmon-cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a.scope.
Nov 26 02:24:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.884348784 +0000 UTC m=+0.244112721 container init cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.900473806 +0000 UTC m=+0.260237743 container start cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.908321106 +0000 UTC m=+0.268085153 container attach cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 02:24:39 compute-0 quirky_bouman[466216]: 167 167
Nov 26 02:24:39 compute-0 systemd[1]: libpod-cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a.scope: Deactivated successfully.
Nov 26 02:24:39 compute-0 podman[466201]: 2025-11-26 02:24:39.918148101 +0000 UTC m=+0.277912028 container died cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07e34611b1a012c8f26bbe5d6d9ca261d10a22528864d054e448d8ca576c174-merged.mount: Deactivated successfully.
Nov 26 02:24:40 compute-0 podman[466201]: 2025-11-26 02:24:40.000683813 +0000 UTC m=+0.360447780 container remove cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 02:24:40 compute-0 systemd[1]: libpod-conmon-cb921ea3b6f71ce4c3a07b3c8c8bf611edd42b47050398aa0f4ffa002d58c20a.scope: Deactivated successfully.
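
The quirky_bouman container above lives for well under a second: podman records create, init, start, attach, died, and remove in sequence, and the only output is "167 167", which looks like a uid/gid probe of the ceph user in the image. This is cephadm's short-lived probe pattern. A sketch reproducing the same lifecycle by hand, where the stat probe command is an assumption chosen to match the "167 167" output:

    #!/usr/bin/env python3
    # Recreate the one-shot container pattern seen above. --rm makes
    # podman delete the container immediately after it exits, matching
    # the create/start/attach/died/remove sequence in the journal.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected "167 167" if the probe matches
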
Nov 26 02:24:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:40 compute-0 podman[466238]: 2025-11-26 02:24:40.286992075 +0000 UTC m=+0.085913488 container create 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 26 02:24:40 compute-0 podman[466238]: 2025-11-26 02:24:40.253743104 +0000 UTC m=+0.052664547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:24:40 compute-0 systemd[1]: Started libpod-conmon-38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086.scope.
Nov 26 02:24:40 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2134fd35e73bfb423f26d8bbff90daba307c56743143ff5207750f87b5c42e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2134fd35e73bfb423f26d8bbff90daba307c56743143ff5207750f87b5c42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2134fd35e73bfb423f26d8bbff90daba307c56743143ff5207750f87b5c42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2134fd35e73bfb423f26d8bbff90daba307c56743143ff5207750f87b5c42e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
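
The four kernel lines above are informational: an XFS filesystem formatted without the bigtime feature can only represent timestamps up to 2038 (0x7fffffff), and the kernel notes this on every (re)mount, here once per bind mount into the container's overlay. A quick check for whether a given XFS mount has bigtime enabled; the mount point is an example and the bigtime field is only printed by newer xfsprogs:

    #!/usr/bin/env python3
    # Report whether an XFS filesystem can store post-2038 timestamps.
    # Requires xfsprogs new enough to print the bigtime flag.
    import subprocess
    import sys

    mountpoint = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/containers"
    info = subprocess.run(["xfs_info", mountpoint],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime=1: timestamps OK past 2038" if "bigtime=1" in info
          else "bigtime=0 (or unreported): timestamps capped at 2038")
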
Nov 26 02:24:40 compute-0 podman[466238]: 2025-11-26 02:24:40.508701817 +0000 UTC m=+0.307623240 container init 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:24:40 compute-0 podman[466238]: 2025-11-26 02:24:40.551419964 +0000 UTC m=+0.350341357 container start 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:24:40 compute-0 podman[466238]: 2025-11-26 02:24:40.557357871 +0000 UTC m=+0.356279284 container attach 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 02:24:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:24:41
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images']
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
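
The balancer block above is one automatic optimization round: the mgr builds plan auto_2025-11-26_02:24:41 in upmap mode with a 5% misplaced-PG ceiling, walks the listed pools, and prepares 0 of an allowed 10 changes, i.e. the PG distribution is already balanced. The same state can be read back from the CLI; a sketch, assuming admin keyring access on the node:

    #!/usr/bin/env python3
    # Query the mgr balancer that produced the plan above.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print("active:", status.get("active"))   # balancer enabled?
    print("mode:", status.get("mode"))       # expected: upmap
    print("plans:", status.get("plans"))     # manually created plans
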
Nov 26 02:24:41 compute-0 amazing_einstein[466254]: {
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_id": 0,
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "type": "bluestore"
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    },
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_id": 2,
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "type": "bluestore"
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    },
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_id": 1,
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:        "type": "bluestore"
Nov 26 02:24:41 compute-0 amazing_einstein[466254]:    }
Nov 26 02:24:41 compute-0 amazing_einstein[466254]: }
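
The amazing_einstein output is a device scan keyed by osd_uuid (the shape matches a ceph-volume listing such as `ceph-volume raw list --format json`): three bluestore OSDs (0, 1, 2) on the LVs ceph_vg0 through ceph_vg2, all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c, which cephadm then persists via the config-key set calls below. A sketch that sanity-checks such a report, assuming only the fields shown:

    #!/usr/bin/env python3
    # Sanity-check an osd_uuid-keyed OSD listing like the one above:
    # every OSD must share one cluster fsid; print osd_id -> device.
    import json
    import sys

    osds = json.load(sys.stdin)
    fsids = {entry["ceph_fsid"] for entry in osds.values()}
    assert len(fsids) == 1, f"mixed cluster fsids: {fsids}"
    for entry in sorted(osds.values(), key=lambda e: e["osd_id"]):
        print(f"osd.{entry['osd_id']} ({entry['type']}) -> {entry['device']}")
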
Nov 26 02:24:41 compute-0 systemd[1]: libpod-38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086.scope: Deactivated successfully.
Nov 26 02:24:41 compute-0 systemd[1]: libpod-38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086.scope: Consumed 1.127s CPU time.
Nov 26 02:24:41 compute-0 podman[466238]: 2025-11-26 02:24:41.681005953 +0000 UTC m=+1.479927376 container died 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2134fd35e73bfb423f26d8bbff90daba307c56743143ff5207750f87b5c42e-merged.mount: Deactivated successfully.
Nov 26 02:24:41 compute-0 podman[466238]: 2025-11-26 02:24:41.791152669 +0000 UTC m=+1.590074062 container remove 38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:24:41 compute-0 systemd[1]: libpod-conmon-38a12ff9e094dcd76344e7456576fa54a2e8e97afa4695309b3a054ab8d68086.scope: Deactivated successfully.
Nov 26 02:24:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:24:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:24:41 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 51bdd976-7bcf-413f-aa05-27b662f18647 does not exist
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6fc6b599-14e9-4acc-895d-a3cbe1956876 does not exist
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:24:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:24:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:42 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.878 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.879 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
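
The two lines above explain the shape of everything that follows: the [pollsters] source defines more pollsters than the single worker thread available, so each registered pollster queues on the executor and runs serially, stretching the polling cycle. A self-contained sketch of that effect; names and timings are illustrative, not ceilometer code:

    #!/usr/bin/env python3
    # Demonstrate why N pollsters on 1 worker thread lengthen the cycle.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)   # stand-in for one pollster's libvirt queries
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, 8 tasks
        start = time.monotonic()
        list(pool.map(poll, pollsters))              # runs serially
    print(f"{len(pollsters)} pollsters on 1 thread: "
          f"{time.monotonic() - start:.1f}s per cycle")
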
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.879 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.881 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.892 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74d081af-66cd-4e37-99e4-31f777885766', 'name': 'te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.898 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'name': 'te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i', 'flavor': {'id': '6db4d080-ab1e-4a78-a6d9-858137b0ba8b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'dbaf181e-c7da-4938-bfef-7ab3aa9a19bc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'user_id': '3a9710ede02d47cbb016ff596d936633', 'hostId': '0514aa3466932c9e7b93e3dcd39fcbb186e60af35850a79a2e38f108', 'status': 'active', 'metadata': {'metering.server_group': 'bd820598-acdd-4f42-8252-1f5951161b01'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.899 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.899 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.900 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.900 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.901 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
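
Blocks like the disk.ephemeral.size one above repeat a fixed per-pollster cycle: run the discovery method, check whether the pollster belongs to a coordinated hash ring (none is configured here), poll each discovered instance, and stamp a heartbeat. A compact sketch of that control flow, illustrative rather than ceilometer's actual implementation:

    #!/usr/bin/env python3
    # One polling cycle per pollster: discover -> (no coordination
    # configured) -> poll each resource -> record heartbeat.
    from datetime import datetime, timezone

    def run_pollster(name, discover, poll, heartbeats):
        resources = discover()                   # "Executing discovery process"
        samples = [poll(r) for r in resources]  # "Polling pollster <name>"
        heartbeats[name] = datetime.now(timezone.utc)  # "heartbeat update"
        return samples

    heartbeats = {}
    samples = run_pollster(
        "disk.ephemeral.size",
        discover=lambda: ["instance-0000000b", "instance-0000000f"],
        poll=lambda inst: (inst, 0),  # flavor m1.nano has 0 GB ephemeral
        heartbeats=heartbeats)
    print(samples)
    print(heartbeats)
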
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.902 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.903 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.903 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T02:24:42.900368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.903 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T02:24:42.903380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.913 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.921 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.923 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.923 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.924 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.924 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.924 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.924 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.925 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.926 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.926 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T02:24:42.924562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.926 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.927 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.927 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T02:24:42.927245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.927 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.928 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.929 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.929 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.929 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.930 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.930 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.930 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.931 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T02:24:42.930567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.931 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.932 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.932 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.933 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.933 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.933 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.933 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.933 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.935 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.936 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.937 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.937 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.937 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.938 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.938 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T02:24:42.933554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T02:24:42.938211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:42.979 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/cpu volume: 338460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.019 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/cpu volume: 339770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.020 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.020 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.020 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.020 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.021 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.021 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.021 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T02:24:43.021387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.022 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
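The "Checking if we need coordination" / hashring lines in each cycle are ceilometer deciding whether this agent owns a given pollster's resources when polling is partitioned across several agents; here no coordination group is configured ([None]), so everything is polled locally. A rough sketch of the partitioning idea, using a crude modulo hash as a stand-in for the consistent hashring (via tooz) that ceilometer actually relies on; all names here are hypothetical:

    # Sketch of the partitioning idea behind the hashring messages above.
    # Hypothetical helper, not ceilometer/tooz code.
    import hashlib

    def responsible_agent(resource_id: str, agents: list[str]) -> str:
        # Crude modulo placement; a real consistent hashring remaps only
        # a fraction of resources when agents join or leave.
        digest = hashlib.md5(resource_id.encode()).hexdigest()
        return agents[int(digest, 16) % len(agents)]

    agents = ['compute-0', 'compute-1']
    for res in ('74d081af-66cd-4e37-99e4-31f777885766',
                'add194b7-6a6c-48ef-8355-3344185eb43e'):
        print(res, '->', responsible_agent(res, agents))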
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.023 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.024 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/memory.usage volume: 42.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.024 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/memory.usage volume: 42.34375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.025 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
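memory.usage is reported in megabytes (42.328125 and 42.34375 MB above), derived from libvirt's per-domain memory counters, which are in KiB. A sketch of one plausible derivation; exactly which counters ceilometer combines varies by version and driver, so treat this as an approximation:

    # Sketch: approximate the memory.usage meter from libvirt memoryStats().
    # Which counters ceilometer combines is version-dependent.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('74d081af-66cd-4e37-99e4-31f777885766')
    stats = dom.memoryStats()                      # all values in KiB
    if 'available' in stats and 'unused' in stats:
        used_mb = (stats['available'] - stats['unused']) / 1024.0
    else:
        used_mb = stats.get('rss', 0) / 1024.0     # fallback: host-side RSS
    print(f"memory.usage ~ {used_mb} MB")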
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.025 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.025 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.025 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.025 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.026 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T02:24:43.023710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.026 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.026 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.026 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.027 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.028 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
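network.incoming.bytes (2150 and 1976 above) is the cumulative rx_bytes counter of an instance's vNIC, which libvirt returns together with packet, error, and drop counts, the same tuple that backs the network.outgoing.* meters in this cycle. A sketch, with 'tap0' as a placeholder device name (real tap names come from the instance's domain XML):

    # Sketch: the vNIC counters behind the network.* meters above.
    # 'tap0' is a placeholder device name.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('74d081af-66cd-4e37-99e4-31f777885766')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('tap0')
    print('network.incoming.bytes:', rx_bytes)
    print('network.outgoing.packets:', tx_packets,
          'drop:', tx_drop, 'error:', tx_errs)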
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.028 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.028 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.028 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T02:24:43.026509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.029 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.029 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.030 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.030 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
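The *.delta meters report how much a cumulative counter grew since the previous polling cycle, which is why both instances show 0 above: no traffic moved between cycles. A toy illustration of the derivation, with a hypothetical in-memory cache rather than ceilometer's actual state handling:

    # Toy sketch of deriving a *.delta sample from a cumulative counter.
    _previous = {}   # hypothetical per-resource cache of the last reading

    def delta(resource_id, cumulative):
        last = _previous.get(resource_id, cumulative)
        _previous[resource_id] = cumulative
        return max(cumulative - last, 0)   # clamp in case a counter resets

    print(delta('74d081af', 2150))   # first cycle, no baseline -> 0
    print(delta('74d081af', 2150))   # counter unchanged -> 0
    print(delta('74d081af', 3000))   # -> 850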
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.031 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.031 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.031 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.031 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.031 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.032 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.032 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.033 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.033 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.033 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.033 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.034 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T02:24:43.029434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T02:24:43.031888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.034 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.035 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T02:24:43.034573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.035 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.036 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.037 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.037 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.037 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.037 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.038 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.038 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.039 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.039 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.039 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.040 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T02:24:43.037729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.040 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.040 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T02:24:43.040684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.064 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.065 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.089 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.090 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.090 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
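Each instance logs two disk.device.capacity samples above because it has two block devices: a 1 GiB (1073741824 byte) root disk and a much smaller 509952-byte device, plausibly a config drive. libvirt returns capacity, allocation, and physical size in a single call, and the disk.device.allocation and disk.device.usage meters later in this cycle show the same per-device pairs. A sketch with placeholder device names:

    # Sketch: the per-device triple behind disk.device.capacity /
    # .allocation / .usage. 'vda' and 'hda' are placeholder names.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('74d081af-66cd-4e37-99e4-31f777885766')
    for dev in ('vda', 'hda'):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity:', capacity,
              'allocation:', allocation, 'physical:', physical)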
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.091 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.091 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.091 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.091 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.091 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T02:24:43.091737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.157 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.158 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.225 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 31291904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.226 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
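disk.device.read.bytes and disk.device.read.requests (and the write.* equivalents polled further down) map onto libvirt's per-device block statistics tuple. A sketch, again with a placeholder device name:

    # Sketch: the per-device I/O counters behind disk.device.read.* and
    # disk.device.write.*. 'vda' is a placeholder device name.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('add194b7-6a6c-48ef-8355-3344185eb43e')
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
    print('read.requests:', rd_req, 'read.bytes:', rd_bytes)
    print('write.requests:', wr_req, 'write.bytes:', wr_bytes)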
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.227 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.227 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.228 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.228 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.228 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.228 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 2432488124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.229 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.latency volume: 867897915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.230 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 2793486770 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T02:24:43.228457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.230 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.latency volume: 209467376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.231 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
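The disk.device.read.latency volumes (about 2.4e9 ns, roughly 2.4 s of accumulated read wait for 74d081af-...) behave like libvirt's rd_total_times counter: cumulative nanoseconds spent on reads, not a per-request latency. A sketch of reading it and deriving a mean per-request figure; the division is illustrative, not something ceilometer emits:

    # Sketch: cumulative read-wait time behind disk.device.read.latency.
    # 'vda' is a placeholder device; the mean is an illustrative derivation.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('74d081af-66cd-4e37-99e4-31f777885766')
    stats = dom.blockStatsFlags('vda')
    total_ns = stats.get('rd_total_times', 0)      # cumulative read wait, ns
    ops = stats.get('rd_operations', 0)
    print('disk.device.read.latency:', total_ns)
    if ops:
        print('mean per-request read latency:', total_ns / ops, 'ns')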
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.231 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.231 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.232 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.232 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.232 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.233 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T02:24:43.232330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.234 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 1145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.234 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.236 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.236 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.236 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.236 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.236 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T02:24:43.236437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.237 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.237 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.238 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.239 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.239 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.240 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.241 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T02:24:43.240277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.241 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.242 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.243 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.243 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.243 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.243 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.244 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.244 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.244 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.245 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T02:24:43.244059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
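power.state volume 1 for both instances matches libvirt's VIR_DOMAIN_RUNNING state code. A sketch:

    # Sketch: the domain state code behind the power.state meter.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('74d081af-66cd-4e37-99e4-31f777885766')
    state, reason = dom.state()
    print('power.state:', state,
          '(VIR_DOMAIN_RUNNING)' if state == libvirt.VIR_DOMAIN_RUNNING else '')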
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.246 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.246 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.246 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.246 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.247 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.247 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 9013075611 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.247 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.248 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 8178329181 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.249 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.250 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.250 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.251 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T02:24:43.246995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.252 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.252 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.253 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.253 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.254 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.255 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.255 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.256 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T02:24:43.251962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.256 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.256 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.257 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.257 15 DEBUG ceilometer.compute.pollsters [-] 74d081af-66cd-4e37-99e4-31f777885766/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.258 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.259 15 DEBUG ceilometer.compute.pollsters [-] add194b7-6a6c-48ef-8355-3344185eb43e/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.259 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T02:24:43.257018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.260 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.260 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.260 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.261 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.261 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.261 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:24:43.265 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:24:43 compute-0 nova_compute[350387]: 2025-11-26 02:24:43.969 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:44 compute-0 nova_compute[350387]: 2025-11-26 02:24:44.638 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.346 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.347 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.348 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.348 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.349 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:24:45 compute-0 podman[466352]: 2025-11-26 02:24:45.565555621 +0000 UTC m=+0.109606852 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 02:24:45 compute-0 podman[466353]: 2025-11-26 02:24:45.565665444 +0000 UTC m=+0.104436007 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 02:24:45 compute-0 podman[466354]: 2025-11-26 02:24:45.59619629 +0000 UTC m=+0.130518538 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:24:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:24:45 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635324331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:24:45 compute-0 nova_compute[350387]: 2025-11-26 02:24:45.879 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.006 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.006 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.013 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.013 350391 DEBUG nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 26 02:24:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.541 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.543 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3495MB free_disk=59.897003173828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.543 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.544 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.668 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance 74d081af-66cd-4e37-99e4-31f777885766 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.669 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Instance add194b7-6a6c-48ef-8355-3344185eb43e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.670 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.671 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:24:46 compute-0 nova_compute[350387]: 2025-11-26 02:24:46.738 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:24:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:24:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673101286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:24:47 compute-0 nova_compute[350387]: 2025-11-26 02:24:47.289 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:24:47 compute-0 nova_compute[350387]: 2025-11-26 02:24:47.299 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:24:47 compute-0 nova_compute[350387]: 2025-11-26 02:24:47.322 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:24:47 compute-0 nova_compute[350387]: 2025-11-26 02:24:47.324 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:24:47 compute-0 nova_compute[350387]: 2025-11-26 02:24:47.325 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:24:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:48 compute-0 nova_compute[350387]: 2025-11-26 02:24:48.973 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:49 compute-0 nova_compute[350387]: 2025-11-26 02:24:49.642 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:50 compute-0 nova_compute[350387]: 2025-11-26 02:24:50.324 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:50 compute-0 nova_compute[350387]: 2025-11-26 02:24:50.326 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:50 compute-0 nova_compute[350387]: 2025-11-26 02:24:50.327 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00152159845672983 of space, bias 1.0, pg target 0.456479537018949 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:24:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:24:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:52 compute-0 nova_compute[350387]: 2025-11-26 02:24:52.301 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:52 compute-0 nova_compute[350387]: 2025-11-26 02:24:52.302 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:24:53 compute-0 nova_compute[350387]: 2025-11-26 02:24:53.068 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 02:24:53 compute-0 nova_compute[350387]: 2025-11-26 02:24:53.069 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquired lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 02:24:53 compute-0 nova_compute[350387]: 2025-11-26 02:24:53.070 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 02:24:53 compute-0 podman[466455]: 2025-11-26 02:24:53.647453482 +0000 UTC m=+0.178490192 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:24:53 compute-0 podman[466454]: 2025-11-26 02:24:53.645273931 +0000 UTC m=+0.186014143 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 26 02:24:53 compute-0 nova_compute[350387]: 2025-11-26 02:24:53.975 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:54 compute-0 nova_compute[350387]: 2025-11-26 02:24:54.646 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:55 compute-0 nova_compute[350387]: 2025-11-26 02:24:55.504 350391 DEBUG nova.network.neutron [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [{"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:24:55 compute-0 nova_compute[350387]: 2025-11-26 02:24:55.527 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Releasing lock "refresh_cache-add194b7-6a6c-48ef-8355-3344185eb43e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 02:24:55 compute-0 nova_compute[350387]: 2025-11-26 02:24:55.528 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 02:24:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:24:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:57 compute-0 nova_compute[350387]: 2025-11-26 02:24:57.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:24:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:24:58 compute-0 podman[466495]: 2025-11-26 02:24:58.599793809 +0000 UTC m=+0.146574777 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git)
Nov 26 02:24:58 compute-0 podman[466496]: 2025-11-26 02:24:58.634812931 +0000 UTC m=+0.169187392 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:24:58 compute-0 nova_compute[350387]: 2025-11-26 02:24:58.978 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:59 compute-0 nova_compute[350387]: 2025-11-26 02:24:59.650 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:24:59 compute-0 podman[158021]: time="2025-11-26T02:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:24:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Nov 26 02:24:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8660 "" "Go-http-client/1.1"
Nov 26 02:25:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:01 compute-0 nova_compute[350387]: 2025-11-26 02:25:01.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:25:01 compute-0 nova_compute[350387]: 2025-11-26 02:25:01.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:25:01 compute-0 nova_compute[350387]: 2025-11-26 02:25:01.298 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:25:01 compute-0 openstack_network_exporter[367323]: ERROR   02:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:25:01 compute-0 openstack_network_exporter[367323]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:25:01 compute-0 openstack_network_exporter[367323]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:25:01 compute-0 openstack_network_exporter[367323]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:25:01 compute-0 openstack_network_exporter[367323]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:25:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.990 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.992 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.992 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.993 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.994 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.996 350391 INFO nova.compute.manager [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Terminating instance#033[00m
Nov 26 02:25:02 compute-0 nova_compute[350387]: 2025-11-26 02:25:02.998 350391 DEBUG nova.compute.manager [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:25:03 compute-0 kernel: tap0659d4f2-a7 (unregistering): left promiscuous mode
Nov 26 02:25:03 compute-0 NetworkManager[48886]: <info>  [1764123903.1220] device (tap0659d4f2-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.142 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_controller[89102]: 2025-11-26T02:25:03Z|00169|binding|INFO|Releasing lport 0659d4f2-a740-4ecb-92df-7e2267226c3e from this chassis (sb_readonly=0)
Nov 26 02:25:03 compute-0 ovn_controller[89102]: 2025-11-26T02:25:03Z|00170|binding|INFO|Setting lport 0659d4f2-a740-4ecb-92df-7e2267226c3e down in Southbound
Nov 26 02:25:03 compute-0 ovn_controller[89102]: 2025-11-26T02:25:03Z|00171|binding|INFO|Removing iface tap0659d4f2-a7 ovn-installed in OVS
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.150 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.154 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:80:c9 10.100.2.57'], port_security=['fa:16:3e:91:80:c9 10.100.2.57'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.57/16', 'neutron:device_id': '74d081af-66cd-4e37-99e4-31f777885766', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02245f78-e221-4ecd-ae3b-975782a68c5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20511ddf-b2cd-472a-84f8-e35fd6d0c575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61c2d3e7-61df-4898-a297-774785d24b01, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=0659d4f2-a740-4ecb-92df-7e2267226c3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.155 286844 INFO neutron.agent.ovn.metadata.agent [-] Port 0659d4f2-a740-4ecb-92df-7e2267226c3e in datapath 02245f78-e221-4ecd-ae3b-975782a68c5e unbound from our chassis#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.157 286844 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02245f78-e221-4ecd-ae3b-975782a68c5e#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.170 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.174 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[2afa532e-a838-4d7a-b531-d8d455927bb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:03 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 26 02:25:03 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 7min 23.251s CPU time.
Nov 26 02:25:03 compute-0 systemd-machined[138512]: Machine qemu-11-instance-0000000b terminated.
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.209 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[45dac983-6459-42e7-a9ab-8791a94834b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.215 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[17512168-bc69-4f95-b42d-0b8705deeb43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:03 compute-0 kernel: tap0659d4f2-a7: entered promiscuous mode
Nov 26 02:25:03 compute-0 kernel: tap0659d4f2-a7 (unregistering): left promiscuous mode
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.245 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.262 413526 DEBUG oslo.privsep.daemon [-] privsep: reply[339b9bc8-68a1-46d9-9c44-a4b84d2c9544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.263 350391 INFO nova.virt.libvirt.driver [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Instance destroyed successfully.#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.264 350391 DEBUG nova.objects.instance [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'resources' on Instance uuid 74d081af-66cd-4e37-99e4-31f777885766 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.287 350391 DEBUG nova.virt.libvirt.vif [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:12:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-752asjmjwjmn-utbvgw2zui7n',id=11,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:12:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-sdlvzrp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:12:29Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=74d081af-66cd-4e37-99e4-31f777885766,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.288 350391 DEBUG nova.network.os_vif_util [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "address": "fa:16:3e:91:80:c9", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.57", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0659d4f2-a7", "ovs_interfaceid": "0659d4f2-a740-4ecb-92df-7e2267226c3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.289 350391 DEBUG nova.network.os_vif_util [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.290 350391 DEBUG os_vif [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.293 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.291 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[f07a389a-f5e0-4270-b83d-f64c5f127fbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02245f78-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c1:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677802, 'reachable_time': 21931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 466554, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
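[Editor's example] The privsep reply above is a pyroute2 netlink RTM_NEWLINK message for tap02245f78-e1, fetched inside the ovnmeta-02245f78-… namespace on behalf of the unprivileged agent. A minimal sketch of an equivalent query, assuming pyroute2 is available and the namespace still exists:

    from pyroute2 import NetNS

    # Dump links inside the OVN metadata namespace and read a few of the
    # IFLA_* attributes visible in the reply above.
    with NetNS('ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e') as ns:
        for msg in ns.get_links():
            print(msg.get_attr('IFLA_IFNAME'),
                  msg['state'],
                  msg.get_attr('IFLA_ADDRESS'))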
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.294 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0659d4f2-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.299 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.302 350391 INFO os_vif [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:80:c9,bridge_name='br-int',has_traffic_filtering=True,id=0659d4f2-a740-4ecb-92df-7e2267226c3e,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0659d4f2-a7')#033[00m
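[Editor's example] The "Unplugging vif …" / "Successfully unplugged vif …" pair brackets a call into the os-vif library. A minimal sketch of that entry point, with field values copied from the VIFOpenVSwitch dump above; the object construction here is illustrative, not Nova's exact code path, and the instance name is a hypothetical value:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin, as nova-compute does at startup

    net = network.Network(id='02245f78-e221-4ecd-ae3b-975782a68c5e',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='0659d4f2-a740-4ecb-92df-7e2267226c3e',
        address='fa:16:3e:91:80:c9',
        vif_name='tap0659d4f2-a7',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='0659d4f2-a740-4ecb-92df-7e2267226c3e'),
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='74d081af-66cd-4e37-99e4-31f777885766',
        name='instance-0000000b')  # hypothetical libvirt domain name
    os_vif.unplug(ovs_vif, inst)   # produces the DelPortCommand seen below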
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.318 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[4b12c370-2ef3-4698-9473-85bc44181a56]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap02245f78-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677815, 'tstamp': 677815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 466555, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap02245f78-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677819, 'tstamp': 677819}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 466555, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.322 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02245f78-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.326 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02245f78-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.327 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.327 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02245f78-e0, col_values=(('external_ids', {'iface-id': 'b6066942-f0e5-4ff0-92ae-a027fdd86fa7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.328 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
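[Editor's example] The DelPortCommand/AddPortCommand/DbSetCommand entries above are single-command ovsdbapp transactions against the local Open vSwitch database. A sketch of the same three operations through ovsdbapp's public API; port, bridge, and external_ids values are taken from the log, while the socket path is an assumed default:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed default socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    # One command per transaction, mirroring the agents' log lines:
    api.del_port('tap02245f78-e0', bridge='br-ex',
                 if_exists=True).execute(check_error=True)
    api.add_port('br-int', 'tap02245f78-e0',
                 may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tap02245f78-e0',
               ('external_ids',
                {'iface-id': 'b6066942-f0e5-4ff0-92ae-a027fdd86fa7'})
               ).execute(check_error=True)

"Transaction caused no change" simply means the row already held the requested values, so ovsdb-server had nothing to commit.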
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.332 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.583 350391 DEBUG nova.compute.manager [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-unplugged-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.583 350391 DEBUG oslo_concurrency.lockutils [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.584 350391 DEBUG oslo_concurrency.lockutils [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.584 350391 DEBUG oslo_concurrency.lockutils [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.587 350391 DEBUG nova.compute.manager [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] No waiting events found dispatching network-vif-unplugged-0659d4f2-a740-4ecb-92df-7e2267226c3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.589 350391 DEBUG nova.compute.manager [req-9e6385eb-c718-47a9-901a-ee0e0c26519b req-0625d7f0-be49-4a6b-bfe9-f1b1a438aa24 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-unplugged-0659d4f2-a740-4ecb-92df-7e2267226c3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
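[Editor's example] The Acquiring/acquired/released triplet above comes from oslo.concurrency's lockutils.synchronized decorator (the "inner" in the file reference), which serializes access to this instance's pending-event queue. The pattern, with the lock name copied from the log and an illustrative body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('74d081af-66cd-4e37-99e4-31f777885766-events')
    def _pop_event():
        # At most one thread mutates this instance's pending-event dict
        # at a time; lockutils logs the Acquiring/acquired/released
        # DEBUG lines around each call.
        pass

    _pop_event()

lockutils.lock() offers the same semantics as a context manager when a decorator does not fit.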
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.653 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.654 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.655 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 02:25:03 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:03.656 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:03 compute-0 nova_compute[350387]: 2025-11-26 02:25:03.984 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.167 350391 INFO nova.virt.libvirt.driver [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Deleting instance files /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766_del#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.168 350391 INFO nova.virt.libvirt.driver [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Deletion of /var/lib/nova/instances/74d081af-66cd-4e37-99e4-31f777885766_del complete#033[00m
Nov 26 02:25:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 392 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.260 350391 INFO nova.compute.manager [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.261 350391 DEBUG oslo.service.loopingcall [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.261 350391 DEBUG nova.compute.manager [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.261 350391 DEBUG nova.network.neutron [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:25:04 compute-0 podman[466576]: 2025-11-26 02:25:04.597425613 +0000 UTC m=+0.130804976 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:25:04 compute-0 podman[466575]: 2025-11-26 02:25:04.639627546 +0000 UTC m=+0.182116784 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc.)
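[Editor's example] The two podman records above are periodic health_status events for the edpm-managed telemetry containers; the test command and mounts come from the embedded config_data. The same check can be triggered by hand; wrapped in Python here for consistency, with the container name taken from the log:

    import subprocess

    # Runs the container's configured healthcheck test
    # ('/openstack/healthcheck node_exporter' per config_data above);
    # exit status 0 means healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter'])
    print('healthy' if result.returncode == 0 else 'unhealthy')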
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.895 350391 DEBUG nova.network.neutron [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:25:04 compute-0 nova_compute[350387]: 2025-11-26 02:25:04.945 350391 INFO nova.compute.manager [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.132 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.133 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.205 350391 DEBUG oslo_concurrency.processutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.692 350391 DEBUG nova.compute.manager [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.693 350391 DEBUG oslo_concurrency.lockutils [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "74d081af-66cd-4e37-99e4-31f777885766-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.694 350391 DEBUG oslo_concurrency.lockutils [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.694 350391 DEBUG oslo_concurrency.lockutils [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.695 350391 DEBUG nova.compute.manager [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] No waiting events found dispatching network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.695 350391 WARNING nova.compute.manager [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received unexpected event network-vif-plugged-0659d4f2-a740-4ecb-92df-7e2267226c3e for instance with vm_state deleted and task_state None.#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.695 350391 DEBUG nova.compute.manager [req-7a964506-28c4-4d45-976a-532d64d36634 req-29eaac0f-d70d-4454-8320-97da77fd9497 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Received event network-vif-deleted-0659d4f2-a740-4ecb-92df-7e2267226c3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:25:05 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774961715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.759 350391 DEBUG oslo_concurrency.processutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
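[Editor's example] The "Running cmd" / "returned: 0 in 0.554s" pair is oslo.concurrency's processutils shelling out to the ceph CLI so the resource tracker can read cluster capacity (the mon's audit log at 02:25:05 shows the same df command being dispatched). A sketch of the call plus JSON parsing; the top-level key names are as in recent Ceph releases and should be treated as an assumption:

    import json

    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(stdout)
    # Assumed key names ('stats', 'total_bytes', 'total_avail_bytes'):
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])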
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.774 350391 DEBUG nova.compute.provider_tree [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.794 350391 DEBUG nova.scheduler.client.report [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
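[Editor's example] Given that inventory, the capacity Placement schedules against per resource class is (total - reserved) * allocation_ratio. Worked out for the values above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2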
Nov 26 02:25:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.820 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.847 350391 INFO nova.scheduler.client.report [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Deleted allocations for instance 74d081af-66cd-4e37-99e4-31f777885766#033[00m
Nov 26 02:25:05 compute-0 nova_compute[350387]: 2025-11-26 02:25:05.942 350391 DEBUG oslo_concurrency.lockutils [None req-d2293311-5d83-4a2f-9e8f-06488b0693be 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "74d081af-66cd-4e37-99e4-31f777885766" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 169 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.435564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906435657, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1683, "num_deletes": 251, "total_data_size": 2762220, "memory_usage": 2806000, "flush_reason": "Manual Compaction"}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906455685, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2702725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44581, "largest_seqno": 46263, "table_properties": {"data_size": 2694908, "index_size": 4760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15817, "raw_average_key_size": 19, "raw_value_size": 2679335, "raw_average_value_size": 3382, "num_data_blocks": 212, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123725, "oldest_key_time": 1764123725, "file_creation_time": 1764123906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 20148 microseconds, and 11629 cpu microseconds.
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.455727) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2702725 bytes OK
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.455742) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.457749) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.457762) EVENT_LOG_v1 {"time_micros": 1764123906457758, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.457777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2754988, prev total WAL file size 2754988, number of live WAL files 2.
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.459006) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2639KB)], [107(6609KB)]
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906459093, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9471320, "oldest_snapshot_seqno": -1}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6072 keys, 7737565 bytes, temperature: kUnknown
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906510002, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7737565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7699939, "index_size": 21324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 158009, "raw_average_key_size": 26, "raw_value_size": 7593066, "raw_average_value_size": 1250, "num_data_blocks": 845, "num_entries": 6072, "num_filter_entries": 6072, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764123906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.510260) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7737565 bytes
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.512799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.8 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 6.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6586, records dropped: 514 output_compression: NoCompression
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.512907) EVENT_LOG_v1 {"time_micros": 1764123906512813, "job": 64, "event": "compaction_finished", "compaction_time_micros": 50986, "compaction_time_cpu_micros": 19252, "output_level": 6, "num_output_files": 1, "total_output_size": 7737565, "num_input_records": 6586, "num_output_records": 6072, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
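[Editor's example] The amplification and throughput figures in the compaction summary can be re-derived from the byte counts logged for job 64 (tables #109, #107, #110). A quick check; the L6 input size is taken as input_data_size minus the L0 table:

    l0_in = 2702725            # bytes read from L0 (table #109)
    total_in = 9471320         # input_data_size from compaction_started
    out = 7737565              # bytes written to L6 (table #110)
    micros = 50986             # compaction_time_micros

    print(round(out / l0_in, 1))               # write-amplify      -> 2.9
    print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 6.4
    # bytes per microsecond ~= MB/s:
    print(round(total_in / micros, 1),         # rd -> 185.8
          round(out / micros, 1))              # wr -> 151.8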
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906513811, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764123906516274, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.458729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.516483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.516489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.516492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.516495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:06 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:25:06.516498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:25:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:08 compute-0 nova_compute[350387]: 2025-11-26 02:25:08.298 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:08 compute-0 nova_compute[350387]: 2025-11-26 02:25:08.988 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.302 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.749 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.750 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.750 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.750 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.751 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.752 350391 INFO nova.compute.manager [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Terminating instance#033[00m
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.753 350391 DEBUG nova.compute.manager [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 02:25:13 compute-0 kernel: tapcaa46d5d-d6 (unregistering): left promiscuous mode
Nov 26 02:25:13 compute-0 NetworkManager[48886]: <info>  [1764123913.8627] device (tapcaa46d5d-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.875 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:13 compute-0 ovn_controller[89102]: 2025-11-26T02:25:13Z|00172|binding|INFO|Releasing lport caa46d5d-d6ee-42de-a514-e911d1f0fc60 from this chassis (sb_readonly=0)
Nov 26 02:25:13 compute-0 ovn_controller[89102]: 2025-11-26T02:25:13Z|00173|binding|INFO|Setting lport caa46d5d-d6ee-42de-a514-e911d1f0fc60 down in Southbound
Nov 26 02:25:13 compute-0 ovn_controller[89102]: 2025-11-26T02:25:13Z|00174|binding|INFO|Removing iface tapcaa46d5d-d6 ovn-installed in OVS
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.878 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:13.887 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:b7:00 10.100.2.215'], port_security=['fa:16:3e:6e:b7:00 10.100.2.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.215/16', 'neutron:device_id': 'add194b7-6a6c-48ef-8355-3344185eb43e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02245f78-e221-4ecd-ae3b-975782a68c5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb4e9e1ffe494961ba45f8f24f21b106', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20511ddf-b2cd-472a-84f8-e35fd6d0c575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61c2d3e7-61df-4898-a297-774785d24b01, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>], logical_port=caa46d5d-d6ee-42de-a514-e911d1f0fc60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f78c00d2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 02:25:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:13.888 286844 INFO neutron.agent.ovn.metadata.agent [-] Port caa46d5d-d6ee-42de-a514-e911d1f0fc60 in datapath 02245f78-e221-4ecd-ae3b-975782a68c5e unbound from our chassis#033[00m
Nov 26 02:25:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:13.890 286844 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02245f78-e221-4ecd-ae3b-975782a68c5e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 02:25:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:13.891 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[3df616b8-6076-411b-9b74-01f510d5268d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:13 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:13.892 286844 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e namespace which is not needed anymore#033[00m
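[Editor's example] "Cleaning up ovnmeta-… namespace" removes the per-network metadata namespace once no VIF ports on that datapath remain bound to this chassis. The privileged half of that teardown is ordinary network-namespace removal; a sketch with pyroute2, the same library the privsep daemon is answering with above:

    from pyroute2 import netns

    NS = 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e'
    if NS in netns.listnetns():
        netns.remove(NS)   # drops /var/run/netns/<NS>; needs privileges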
Nov 26 02:25:13 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.908 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:13 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 26 02:25:13 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 4.920s CPU time.
Nov 26 02:25:13 compute-0 systemd-machined[138512]: Machine qemu-16-instance-0000000f terminated.
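[Editor's example] The machine name qemu-16-instance-0000000f carries the libvirt domain name, which Nova renders from the instance's database id (15 for add194b7-…, per the Instance dump below) using its default instance_name_template of 'instance-%08x':

    # 15 == 0xf, hence instance-0000000f for the instance with id=15
    print('instance-%08x' % 15)   # -> instance-0000000f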
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:13.999 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.010 350391 INFO nova.virt.libvirt.driver [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Instance destroyed successfully.#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.010 350391 DEBUG nova.objects.instance [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lazy-loading 'resources' on Instance uuid add194b7-6a6c-48ef-8355-3344185eb43e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.025 350391 DEBUG nova.virt.libvirt.vif [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T02:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i',id=15,image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T02:15:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='bd820598-acdd-4f42-8252-1f5951161b01'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb4e9e1ffe494961ba45f8f24f21b106',ramdisk_id='',reservation_id='r-lsmzl6nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dbaf181e-c7da-4938-bfef-7ab3aa9a19bc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-624283200',owner_user_name='tempest-PrometheusGabbiTest-624283200-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T02:15:15Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='3a9710ede02d47cbb016ff596d936633',uuid=add194b7-6a6c-48ef-8355-3344185eb43e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.026 350391 DEBUG nova.network.os_vif_util [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converting VIF {"id": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "address": "fa:16:3e:6e:b7:00", "network": {"id": "02245f78-e221-4ecd-ae3b-975782a68c5e", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.215", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb4e9e1ffe494961ba45f8f24f21b106", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaa46d5d-d6", "ovs_interfaceid": "caa46d5d-d6ee-42de-a514-e911d1f0fc60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.028 350391 DEBUG nova.network.os_vif_util [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.029 350391 DEBUG os_vif [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.033 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.034 350391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa46d5d-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.039 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.041 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
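The DelPortCommand above is an ovsdbapp transaction removing the instance's tap port from br-int. A minimal sketch of issuing the same command directly through ovsdbapp's Open_vSwitch schema API follows; the local ovsdb-server socket path is an assumption (it is the usual default on this platform), and the port/bridge names are taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local ovsdb-server socket; adjust for the deployment
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # same semantics as the logged txn: delete the port, tolerate absence
    api.del_port('tapcaa46d5d-d6', bridge='br-int',
                 if_exists=True).execute(check_error=True)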
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.046 350391 INFO os_vif [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:b7:00,bridge_name='br-int',has_traffic_filtering=True,id=caa46d5d-d6ee-42de-a514-e911d1f0fc60,network=Network(02245f78-e221-4ecd-ae3b-975782a68c5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaa46d5d-d6')#033[00m
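os_vif dispatched that unplug to its 'ovs' plugin based on the VIFOpenVSwitch object shown in the log. A minimal sketch of the same call against the public os-vif API, with field values copied from the lines above (it needs the os-vif OVS plugin installed and privileges to reach OVSDB):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, e.g. 'ovs'
    net = network.Network(id='02245f78-e221-4ecd-ae3b-975782a68c5e',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='caa46d5d-d6ee-42de-a514-e911d1f0fc60',
        address='fa:16:3e:6e:b7:00',
        network=net,
        vif_name='tapcaa46d5d-d6',
        bridge_name='br-int')
    inst = instance_info.InstanceInfo(
        uuid='add194b7-6a6c-48ef-8355-3344185eb43e',
        name='te-9551628-asg-agzqqfkj5yfv-qlp6pkk65bxs-dtpyatzesj3i')
    os_vif.unplug(port, inst)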
Nov 26 02:25:14 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [NOTICE]   (446774) : haproxy version is 2.8.14-c23fe91
Nov 26 02:25:14 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [NOTICE]   (446774) : path to executable is /usr/sbin/haproxy
Nov 26 02:25:14 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [WARNING]  (446774) : Exiting Master process...
Nov 26 02:25:14 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [ALERT]    (446774) : Current worker (446800) exited with code 143 (Terminated)
Nov 26 02:25:14 compute-0 neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e[446753]: [WARNING]  (446774) : All workers exited. Exiting... (0)
Nov 26 02:25:14 compute-0 systemd[1]: libpod-9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326.scope: Deactivated successfully.
Nov 26 02:25:14 compute-0 podman[466675]: 2025-11-26 02:25:14.181202804 +0000 UTC m=+0.102030610 container died 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 02:25:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 157 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326-userdata-shm.mount: Deactivated successfully.
Nov 26 02:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1872517a346af7240d3817d8e2052f966c97db92ccb2808d6c4b55f00422ae1f-merged.mount: Deactivated successfully.
Nov 26 02:25:14 compute-0 podman[466675]: 2025-11-26 02:25:14.270976229 +0000 UTC m=+0.191804025 container cleanup 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 26 02:25:14 compute-0 systemd[1]: libpod-conmon-9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326.scope: Deactivated successfully.
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.356 350391 DEBUG nova.compute.manager [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-unplugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.356 350391 DEBUG oslo_concurrency.lockutils [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.356 350391 DEBUG oslo_concurrency.lockutils [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.356 350391 DEBUG oslo_concurrency.lockutils [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.357 350391 DEBUG nova.compute.manager [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] No waiting events found dispatching network-vif-unplugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.357 350391 DEBUG nova.compute.manager [req-023a5eb9-2025-4e21-ac3b-773b7014f88a req-78b31c1b-ce11-41e4-bf8e-1f749e835668 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-unplugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
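The acquire/release pairs above come from oslo.concurrency's named-lock helper, which serializes external-event delivery per instance. A minimal sketch of the same pattern; the lock name is copied from the log, the body is illustrative:

    from oslo_concurrency import lockutils

    # one named lock per instance guards its pending-event table
    with lockutils.lock('add194b7-6a6c-48ef-8355-3344185eb43e-events'):
        # look up / pop the waiter for the incoming event here
        pass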
Nov 26 02:25:14 compute-0 podman[466721]: 2025-11-26 02:25:14.418200184 +0000 UTC m=+0.093292134 container remove 9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.428 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[5d154e84-476f-4e8d-9bb8-261dce5835b3]: (4, ('Wed Nov 26 02:25:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e (9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326)\n9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326\nWed Nov 26 02:25:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e (9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326)\n9219975fe38cbf3212487f1feb2bf487a2f4b1fd222e03710329805d22453326\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.431 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7b467f-4582-4dd6-82a0-b4bdb94ca920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
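The "privsep: reply" lines are answers from oslo.privsep's privileged daemon, which runs whitelisted functions (here the container kill script and netlink calls) with elevated capabilities on behalf of the unprivileged agent. A sketch of how such an entrypoint is declared; the context and function below are illustrative, not neutron's actual definitions, though neutron ships similar ones under neutron.privileged:

    from oslo_privsep import capabilities, priv_context

    # illustrative privsep context with the capabilities it will retain
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def remove_device(name):
        # body executes inside the forked privileged daemon; the caller
        # only ever sees the serialized reply, as logged above
        ...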
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.432 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02245f78-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.434 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 kernel: tap02245f78-e0: left promiscuous mode
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.453 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.456 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.461 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[79f6ebd3-f327-4b76-9eea-5f161f1d779f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.482 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[62f28e42-bfe1-470f-bf0f-7718b4ddb57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.484 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[24fb517b-7a5b-4d2b-90c5-f7a0559ad9b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.503 413433 DEBUG oslo.privsep.daemon [-] privsep: reply[41668a32-fe83-4101-adec-7c3226b1d240]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677793, 'reachable_time': 30902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 466736, 'error': None, 'target': 'ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.506 287175 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 02:25:14 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:14.506 287175 DEBUG oslo.privsep.daemon [-] privsep: reply[b06a99f2-3294-4a52-bdaf-cbfe1992de60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 02:25:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d02245f78\x2de221\x2d4ecd\x2dae3b\x2d975782a68c5e.mount: Deactivated successfully.
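remove_netns tore down the ovnmeta- namespace that hosted the metadata proxy, and systemd then reaped its bind mount under /run/netns. Neutron's privileged ip_lib delegates this to pyroute2; a minimal root-only sketch of the equivalent call, with the namespace name taken from the log:

    from pyroute2 import netns

    # equivalent of the logged remove_netns; requires CAP_SYS_ADMIN
    netns.remove('ovnmeta-02245f78-e221-4ecd-ae3b-975782a68c5e')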
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.889 350391 INFO nova.virt.libvirt.driver [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Deleting instance files /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e_del#033[00m
Nov 26 02:25:14 compute-0 nova_compute[350387]: 2025-11-26 02:25:14.890 350391 INFO nova.virt.libvirt.driver [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Deletion of /var/lib/nova/instances/add194b7-6a6c-48ef-8355-3344185eb43e_del complete#033[00m
Nov 26 02:25:15 compute-0 nova_compute[350387]: 2025-11-26 02:25:15.058 350391 INFO nova.compute.manager [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 02:25:15 compute-0 nova_compute[350387]: 2025-11-26 02:25:15.058 350391 DEBUG oslo.service.loopingcall [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 02:25:15 compute-0 nova_compute[350387]: 2025-11-26 02:25:15.059 350391 DEBUG nova.compute.manager [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 02:25:15 compute-0 nova_compute[350387]: 2025-11-26 02:25:15.059 350391 DEBUG nova.network.neutron [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 02:25:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 103 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 35 op/s
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.473 350391 DEBUG nova.compute.manager [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.473 350391 DEBUG oslo_concurrency.lockutils [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Acquiring lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.474 350391 DEBUG oslo_concurrency.lockutils [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.474 350391 DEBUG oslo_concurrency.lockutils [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.475 350391 DEBUG nova.compute.manager [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] No waiting events found dispatching network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 02:25:16 compute-0 nova_compute[350387]: 2025-11-26 02:25:16.475 350391 WARNING nova.compute.manager [req-8b1fdaa1-cfc5-4a72-987f-67994c72839e req-9a6f3786-88ea-4bb5-b671-29dbcb5ca4f7 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received unexpected event network-vif-plugged-caa46d5d-d6ee-42de-a514-e911d1f0fc60 for instance with vm_state active and task_state deleting.#033[00m
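The warning fires because the network-vif-plugged event arrived while no waiter was registered (likely the port flapping during teardown), so the event is dropped. Conceptually the event table behaves like the illustrative sketch below; this is not nova's actual code, just the pop-or-drop semantics visible in the log:

    import threading
    from collections import defaultdict

    waiters = defaultdict(dict)  # instance uuid -> {event tag: Event}

    def prepare_event(uuid, tag):
        # registered before an operation that expects a neutron event
        waiters[uuid][tag] = threading.Event()
        return waiters[uuid][tag]

    def pop_instance_event(uuid, tag):
        # called when an external event arrives from neutron
        ev = waiters[uuid].pop(tag, None)
        if ev is None:
            return False  # "No waiting events found dispatching ..."
        ev.set()
        return True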
Nov 26 02:25:16 compute-0 podman[466737]: 2025-11-26 02:25:16.586205779 +0000 UTC m=+0.131772353 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:25:16 compute-0 podman[466738]: 2025-11-26 02:25:16.5930272 +0000 UTC m=+0.135826577 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 02:25:16 compute-0 podman[466739]: 2025-11-26 02:25:16.625454408 +0000 UTC m=+0.160982221 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:25:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 77 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.207 350391 DEBUG nova.network.neutron [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.257 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123903.2562222, 74d081af-66cd-4e37-99e4-31f777885766 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.258 350391 INFO nova.compute.manager [-] [instance: 74d081af-66cd-4e37-99e4-31f777885766] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.265 350391 INFO nova.compute.manager [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Took 3.21 seconds to deallocate network for instance.#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.304 350391 DEBUG nova.compute.manager [None req-e629dc20-4a28-42a7-ac45-baafa18523fe - - - - - -] [instance: 74d081af-66cd-4e37-99e4-31f777885766] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.336 350391 DEBUG nova.compute.manager [req-7120842f-47b8-4518-9ef6-9259d1dc066a req-7fb9b5e2-3f04-4f8f-a352-daa897641273 994caa2fa87141a2be1c848b2b6f8d66 7e0e2c38263841cfaf602258f17f5f5d - - default default] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Received event network-vif-deleted-caa46d5d-d6ee-42de-a514-e911d1f0fc60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.340 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.340 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.407 350391 DEBUG oslo_concurrency.processutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:25:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:25:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540544521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.910 350391 DEBUG oslo_concurrency.processutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
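That "ceph df" round trip is how the libvirt driver samples pool capacity for the RBD image backend. A sketch of the same probe through oslo.concurrency, using the exact command from the log (it needs the client.openstack keyring referenced by ceph.conf):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])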
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.922 350391 DEBUG nova.compute.provider_tree [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.941 350391 DEBUG nova.scheduler.client.report [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:25:18 compute-0 nova_compute[350387]: 2025-11-26 02:25:18.968 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
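Placement turns the inventory reported above into schedulable capacity as (total - reserved) * allocation_ratio per resource class; a quick check with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2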
Nov 26 02:25:19 compute-0 nova_compute[350387]: 2025-11-26 02:25:19.000 350391 INFO nova.scheduler.client.report [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Deleted allocations for instance add194b7-6a6c-48ef-8355-3344185eb43e#033[00m
Nov 26 02:25:19 compute-0 nova_compute[350387]: 2025-11-26 02:25:19.006 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:19 compute-0 nova_compute[350387]: 2025-11-26 02:25:19.037 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:19 compute-0 nova_compute[350387]: 2025-11-26 02:25:19.056 350391 DEBUG oslo_concurrency.lockutils [None req-47e46f95-53d5-4683-b5bd-503261690b30 3a9710ede02d47cbb016ff596d936633 cb4e9e1ffe494961ba45f8f24f21b106 - - default default] Lock "add194b7-6a6c-48ef-8355-3344185eb43e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:24 compute-0 nova_compute[350387]: 2025-11-26 02:25:24.009 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:24 compute-0 nova_compute[350387]: 2025-11-26 02:25:24.042 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 26 02:25:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 26 02:25:24 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 26 02:25:24 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 26 02:25:24 compute-0 podman[466820]: 2025-11-26 02:25:24.600619567 +0000 UTC m=+0.153910153 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:25:24 compute-0 podman[466821]: 2025-11-26 02:25:24.620428442 +0000 UTC m=+0.165360124 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 02:25:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:25.013 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:25:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:25.014 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:25:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:25:25.014 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:25:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1023 B/s wr, 42 op/s
Nov 26 02:25:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 26 02:25:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 26 02:25:26 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 26 02:25:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:25:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466842154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:25:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:25:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466842154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:25:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 26 02:25:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 26 02:25:27 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 26 02:25:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 85 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Nov 26 02:25:29 compute-0 nova_compute[350387]: 2025-11-26 02:25:29.003 350391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764123914.0021768, add194b7-6a6c-48ef-8355-3344185eb43e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 02:25:29 compute-0 nova_compute[350387]: 2025-11-26 02:25:29.004 350391 INFO nova.compute.manager [-] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] VM Stopped (Lifecycle Event)#033[00m
Nov 26 02:25:29 compute-0 nova_compute[350387]: 2025-11-26 02:25:29.012 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:29 compute-0 nova_compute[350387]: 2025-11-26 02:25:29.026 350391 DEBUG nova.compute.manager [None req-a4e5328b-835c-471a-a5fc-78ef677133a7 - - - - - -] [instance: add194b7-6a6c-48ef-8355-3344185eb43e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 02:25:29 compute-0 nova_compute[350387]: 2025-11-26 02:25:29.045 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:29 compute-0 podman[466862]: 2025-11-26 02:25:29.588803798 +0000 UTC m=+0.134720606 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Nov 26 02:25:29 compute-0 podman[466863]: 2025-11-26 02:25:29.615699352 +0000 UTC m=+0.157277298 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 26 02:25:29 compute-0 podman[158021]: time="2025-11-26T02:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:25:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:25:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8193 "" "Go-http-client/1.1"
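Those GET lines are podman_exporter scraping the libpod REST API over the podman socket. A self-contained sketch of the same query using only the standard library; the socket path is assumed to be the default rootful one, and the URL is copied from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    body = conn.getresponse().read()
    print(len(json.loads(body)), 'containers')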
Nov 26 02:25:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 73 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 3.4 MiB/s wr, 112 op/s
Nov 26 02:25:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 26 02:25:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 26 02:25:30 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 26 02:25:31 compute-0 nova_compute[350387]: 2025-11-26 02:25:31.200 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:31 compute-0 openstack_network_exporter[367323]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:25:31 compute-0 openstack_network_exporter[367323]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:25:31 compute-0 openstack_network_exporter[367323]: ERROR   02:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:25:31 compute-0 openstack_network_exporter[367323]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:25:31 compute-0 openstack_network_exporter[367323]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:25:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.4 MiB/s wr, 84 op/s
Nov 26 02:25:34 compute-0 nova_compute[350387]: 2025-11-26 02:25:34.015 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:34 compute-0 nova_compute[350387]: 2025-11-26 02:25:34.048 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:25:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 26 02:25:35 compute-0 podman[466901]: 2025-11-26 02:25:35.490442621 +0000 UTC m=+0.135933159 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:25:35 compute-0 podman[466900]: 2025-11-26 02:25:35.511624275 +0000 UTC m=+0.163467731 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:25:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 26 02:25:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 26 02:25:35 compute-0 ceph-mon[192746]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 26 02:25:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 572 KiB/s wr, 55 op/s
Nov 26 02:25:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s
Nov 26 02:25:39 compute-0 nova_compute[350387]: 2025-11-26 02:25:39.018 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:39 compute-0 nova_compute[350387]: 2025-11-26 02:25:39.051 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 654 B/s rd, 0 B/s wr, 1 op/s
Nov 26 02:25:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:25:41
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', 'backups']
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
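Editor's note: the balancer pass above (mode upmap, max misplaced 0.050000) walked the 11 listed pools and prepared 0/10 changes, i.e. no upmap adjustments were needed. A hedged sketch of inspecting the same mgr module from a client; both subcommands exist in upstream Ceph, the loop itself is illustrative:

    import subprocess

    # "ceph balancer status" reports the active mode ("upmap" above);
    # "ceph balancer eval" scores how balanced the current PG distribution is.
    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)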
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:25:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:25:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 71eb6f92-6c48-489f-a703-957218b58b12 does not exist
Nov 26 02:25:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 11cd1aa6-1731-4379-bae9-3dfeb6a53d41 does not exist
Nov 26 02:25:43 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 62d05626-06e7-49dc-aba5-d8c9dff77892 does not exist
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:25:43 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:25:43 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
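Editor's note: the handle_command / audit pairs above are cephadm's periodic refresh hitting the monitor (config generate-minimal-conf, auth get, osd tree). A minimal sketch of dispatching one of the same mon commands through the Python rados binding, assuming this host's /etc/ceph/ceph.conf and an admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    # Mirrors the {"prefix": "config generate-minimal-conf"} dispatch logged above.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b""
    )
    print(outbuf.decode())
    cluster.shutdown()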
Nov 26 02:25:44 compute-0 nova_compute[350387]: 2025-11-26 02:25:44.021 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:44 compute-0 nova_compute[350387]: 2025-11-26 02:25:44.053 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:25:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:44 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.727603732 +0000 UTC m=+0.083596063 container create 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.690856713 +0000 UTC m=+0.046849074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:44 compute-0 systemd[1]: Started libpod-conmon-6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde.scope.
Nov 26 02:25:44 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.892242595 +0000 UTC m=+0.248234996 container init 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.906560656 +0000 UTC m=+0.262553017 container start 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.915940799 +0000 UTC m=+0.271933160 container attach 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:25:44 compute-0 wonderful_gagarin[467226]: 167 167
Nov 26 02:25:44 compute-0 systemd[1]: libpod-6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde.scope: Deactivated successfully.
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.92204517 +0000 UTC m=+0.278037531 container died 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-465aa73136a3fef0935df500c193a79897340f75edc83756f78b77c6488b40e3-merged.mount: Deactivated successfully.
Nov 26 02:25:44 compute-0 podman[467211]: 2025-11-26 02:25:44.988612135 +0000 UTC m=+0.344604496 container remove 6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:25:45 compute-0 systemd[1]: libpod-conmon-6b3a8f27d7962a8d355a1c6283b1968337c8661dd1bfdab332b979abbb620bde.scope: Deactivated successfully.
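Editor's note: wonderful_gagarin above shows the full lifecycle of a short-lived cephadm utility container: image pull, create, init, start, attach, exit ("died"), remove, all within one second. The same event stream can be watched live; a sketch assuming only the podman CLI (--format json is standard podman events output):

    import subprocess

    # Streams lifecycle events like the create/start/died/remove entries above,
    # one JSON object per line; add --filter event=... to narrow the stream.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        print(line.rstrip())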
Nov 26 02:25:45 compute-0 podman[467250]: 2025-11-26 02:25:45.274386992 +0000 UTC m=+0.089278242 container create 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:25:45 compute-0 podman[467250]: 2025-11-26 02:25:45.246796379 +0000 UTC m=+0.061687689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:45 compute-0 systemd[1]: Started libpod-conmon-6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96.scope.
Nov 26 02:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:45 compute-0 podman[467250]: 2025-11-26 02:25:45.472519414 +0000 UTC m=+0.287410704 container init 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 02:25:45 compute-0 podman[467250]: 2025-11-26 02:25:45.49595861 +0000 UTC m=+0.310849860 container start 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:25:45 compute-0 podman[467250]: 2025-11-26 02:25:45.50272915 +0000 UTC m=+0.317620430 container attach 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:25:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:46 compute-0 nova_compute[350387]: 2025-11-26 02:25:46.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:46 compute-0 tender_montalcini[467266]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:25:46 compute-0 tender_montalcini[467266]: --> relative data size: 1.0
Nov 26 02:25:46 compute-0 tender_montalcini[467266]: --> All data devices are unavailable
Nov 26 02:25:46 compute-0 systemd[1]: libpod-6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96.scope: Deactivated successfully.
Nov 26 02:25:46 compute-0 systemd[1]: libpod-6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96.scope: Consumed 1.319s CPU time.
Nov 26 02:25:46 compute-0 podman[467250]: 2025-11-26 02:25:46.867443946 +0000 UTC m=+1.682335216 container died 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e2762855f9bbd94c1825db1664476fefb327ea6edfa2862fa46eaf2c07386db-merged.mount: Deactivated successfully.
Nov 26 02:25:46 compute-0 podman[467250]: 2025-11-26 02:25:46.965569046 +0000 UTC m=+1.780460286 container remove 6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:25:46 compute-0 systemd[1]: libpod-conmon-6805444c7126eef6bc6a96945d24623f9ef5d83cdcecdd24fb4ab9b1eb4e3f96.scope: Deactivated successfully.
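Editor's note: tender_montalcini above is cephadm evaluating its drive group: 3 LVM data devices were passed in and all were reported unavailable, consistent with them already carrying the OSDs listed later in this log. A hedged way to see per-device availability and rejection reasons (ceph-volume inventory is an upstream command; the parsing is illustrative):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))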
Nov 26 02:25:47 compute-0 podman[467296]: 2025-11-26 02:25:47.03244502 +0000 UTC m=+0.111386592 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:25:47 compute-0 podman[467304]: 2025-11-26 02:25:47.034408495 +0000 UTC m=+0.105453246 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:25:47 compute-0 podman[467303]: 2025-11-26 02:25:47.070176077 +0000 UTC m=+0.137969797 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.339 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.340 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.340 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.340 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.341 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:25:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:25:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136125904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:25:47 compute-0 nova_compute[350387]: 2025-11-26 02:25:47.848 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
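Editor's note: the Running cmd / returned pair above is nova's resource audit shelling out to ceph df via oslo.concurrency. A sketch of that call pattern; processutils is the library named in the logged paths, the surrounding code is illustrative:

    import json
    from oslo_concurrency import processutils

    # Equivalent to the logged command; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit, which nova would log instead
    # of the "returned: 0" line above.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    print(json.loads(out)["stats"]["total_avail_bytes"])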
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.020199015 +0000 UTC m=+0.108748078 container create 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:47.982615742 +0000 UTC m=+0.071164865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:48 compute-0 systemd[1]: Started libpod-conmon-5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc.scope.
Nov 26 02:25:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.188602043 +0000 UTC m=+0.277151086 container init 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.207113342 +0000 UTC m=+0.295662415 container start 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.214921601 +0000 UTC m=+0.303470654 container attach 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:25:48 compute-0 blissful_noyce[467544]: 167 167
Nov 26 02:25:48 compute-0 systemd[1]: libpod-5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc.scope: Deactivated successfully.
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.218167462 +0000 UTC m=+0.306716505 container died 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 26 02:25:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ad419e863016bedfe63e6faace994a3e104051869e3d223539c18c07c420f4b-merged.mount: Deactivated successfully.
Nov 26 02:25:48 compute-0 podman[467528]: 2025-11-26 02:25:48.292407982 +0000 UTC m=+0.380957025 container remove 5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_noyce, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:25:48 compute-0 systemd[1]: libpod-conmon-5c63712d9399fa5ccc281657f21728907c8656de376905739c93267fd73d57cc.scope: Deactivated successfully.
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.425 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.427 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3968MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.428 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.428 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.531 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.531 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:25:48 compute-0 podman[467566]: 2025-11-26 02:25:48.56644594 +0000 UTC m=+0.074831368 container create 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:25:48 compute-0 nova_compute[350387]: 2025-11-26 02:25:48.576 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:25:48 compute-0 podman[467566]: 2025-11-26 02:25:48.542490879 +0000 UTC m=+0.050876287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:48 compute-0 systemd[1]: Started libpod-conmon-45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08.scope.
Nov 26 02:25:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b917087d71cb0b45232ae9076cfc01a4196b26c94d62a916e54d8b2a8726efb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b917087d71cb0b45232ae9076cfc01a4196b26c94d62a916e54d8b2a8726efb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b917087d71cb0b45232ae9076cfc01a4196b26c94d62a916e54d8b2a8726efb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b917087d71cb0b45232ae9076cfc01a4196b26c94d62a916e54d8b2a8726efb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:48 compute-0 podman[467566]: 2025-11-26 02:25:48.729095287 +0000 UTC m=+0.237480755 container init 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:25:48 compute-0 podman[467566]: 2025-11-26 02:25:48.750033934 +0000 UTC m=+0.258419352 container start 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:25:48 compute-0 podman[467566]: 2025-11-26 02:25:48.756379942 +0000 UTC m=+0.264765390 container attach 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.024 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.055 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:25:49 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132578590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.142 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.157 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.176 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
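Editor's note: the inventory above fixes the capacity the placement service schedules against: each resource class exposes (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Values copied from the inventory data above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # VCPU -> 32.0, MEMORY_MB -> 7167.0, DISK_GB -> 52.2
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])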
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.210 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:25:49 compute-0 nova_compute[350387]: 2025-11-26 02:25:49.211 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
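Editor's note: the per-OSD JSON emitted by eager_albattani below (OSD ids "0", "1", "2" mapped to ceph_lv0/1/2 on /dev/loop3, loop4, loop5) has the shape of ceph-volume lvm list --format json, which cephadm runs in these throwaway containers. A hedged sketch of consuming such output:

    import json
    import subprocess

    # Assumed to be the command behind the container output below.
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])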
Nov 26 02:25:49 compute-0 eager_albattani[467583]: {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    "0": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "devices": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "/dev/loop3"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            ],
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_name": "ceph_lv0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_size": "21470642176",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "name": "ceph_lv0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "tags": {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_name": "ceph",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.crush_device_class": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.encrypted": "0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_id": "0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.vdo": "0"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            },
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "vg_name": "ceph_vg0"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        }
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    ],
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    "1": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "devices": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "/dev/loop4"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            ],
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_name": "ceph_lv1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_size": "21470642176",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "name": "ceph_lv1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "tags": {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_name": "ceph",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.crush_device_class": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.encrypted": "0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_id": "1",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.vdo": "0"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            },
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "vg_name": "ceph_vg1"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        }
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    ],
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    "2": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "devices": [
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "/dev/loop5"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            ],
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_name": "ceph_lv2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_size": "21470642176",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "name": "ceph_lv2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "tags": {
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.cluster_name": "ceph",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.crush_device_class": "",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.encrypted": "0",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osd_id": "2",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:                "ceph.vdo": "0"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            },
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "type": "block",
Nov 26 02:25:49 compute-0 eager_albattani[467583]:            "vg_name": "ceph_vg2"
Nov 26 02:25:49 compute-0 eager_albattani[467583]:        }
Nov 26 02:25:49 compute-0 eager_albattani[467583]:    ]
Nov 26 02:25:49 compute-0 eager_albattani[467583]: }
Nov 26 02:25:49 compute-0 systemd[1]: libpod-45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08.scope: Deactivated successfully.
Nov 26 02:25:49 compute-0 podman[467566]: 2025-11-26 02:25:49.645212895 +0000 UTC m=+1.153598313 container died 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b917087d71cb0b45232ae9076cfc01a4196b26c94d62a916e54d8b2a8726efb2-merged.mount: Deactivated successfully.
Nov 26 02:25:49 compute-0 podman[467566]: 2025-11-26 02:25:49.73819227 +0000 UTC m=+1.246577698 container remove 45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 26 02:25:49 compute-0 systemd[1]: libpod-conmon-45f9ebe8051b03377fde2a91e36660a792004712e193d7037091ac7a857f7e08.scope: Deactivated successfully.
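[annotation] The JSON dump printed by the one-shot eager_albattani container above matches the output of `ceph-volume lvm list --format json`, which cephadm runs inside a ceph container during its device refresh: a map of OSD id to logical volumes, with the cluster and OSD identity carried in the LVM tags. As a minimal sketch (the file name is hypothetical), the id-to-device mapping can be pulled out of such a saved dump like this:

    import json

    # Parse a saved `ceph-volume lvm list --format json` dump like the one above.
    with open("lvm_list.json") as f:          # hypothetical path
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices={','.join(lv['devices'])}, "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')})")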
Nov 26 02:25:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.852977684 +0000 UTC m=+0.092231945 container create 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.818174669 +0000 UTC m=+0.057428990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:50 compute-0 systemd[1]: Started libpod-conmon-63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705.scope.
Nov 26 02:25:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.966422923 +0000 UTC m=+0.205677234 container init 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.978723138 +0000 UTC m=+0.217977399 container start 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.984758447 +0000 UTC m=+0.224012698 container attach 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:25:50 compute-0 hardcore_borg[467776]: 167 167
Nov 26 02:25:50 compute-0 systemd[1]: libpod-63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705.scope: Deactivated successfully.
Nov 26 02:25:50 compute-0 podman[467761]: 2025-11-26 02:25:50.989348865 +0000 UTC m=+0.228603176 container died 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb8b8d52f880c61dc66be90a51ccc4937220da373afc335b5cf64e0ece72a76d-merged.mount: Deactivated successfully.
Nov 26 02:25:51 compute-0 podman[467761]: 2025-11-26 02:25:51.068590176 +0000 UTC m=+0.307844437 container remove 63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 02:25:51 compute-0 systemd[1]: libpod-conmon-63436c115e6659e1754d9df9309888ae6b5c250e79318c81cb6c724c2ce82705.scope: Deactivated successfully.
Nov 26 02:25:51 compute-0 nova_compute[350387]: 2025-11-26 02:25:51.213 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:51 compute-0 nova_compute[350387]: 2025-11-26 02:25:51.215 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:51 compute-0 nova_compute[350387]: 2025-11-26 02:25:51.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:51 compute-0 podman[467800]: 2025-11-26 02:25:51.338250171 +0000 UTC m=+0.080512547 container create 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:25:51 compute-0 podman[467800]: 2025-11-26 02:25:51.307220932 +0000 UTC m=+0.049483388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:25:51 compute-0 systemd[1]: Started libpod-conmon-75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0.scope.
Nov 26 02:25:51 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a5fe1efe668ccf437315dfbced25689075a1d3924b96ccfbbe14720c579b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a5fe1efe668ccf437315dfbced25689075a1d3924b96ccfbbe14720c579b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a5fe1efe668ccf437315dfbced25689075a1d3924b96ccfbbe14720c579b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a5fe1efe668ccf437315dfbced25689075a1d3924b96ccfbbe14720c579b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
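[annotation] The repeated xfs warnings are the 32-bit time_t ceiling: 0x7fffffff seconds after the Unix epoch. A quick check of the date that logged constant decodes to:

    import datetime

    limit = 0x7fffffff  # value printed by the kernel above
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00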
Nov 26 02:25:51 compute-0 podman[467800]: 2025-11-26 02:25:51.513297296 +0000 UTC m=+0.255559782 container init 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:25:51 compute-0 podman[467800]: 2025-11-26 02:25:51.534996404 +0000 UTC m=+0.277258810 container start 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:25:51 compute-0 podman[467800]: 2025-11-26 02:25:51.541882196 +0000 UTC m=+0.284144672 container attach 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:25:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
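[annotation] The pg_autoscaler lines above are self-consistent: each "pg target" equals the pool's used-space ratio times its bias times a capacity term that works out to 300 here, which is what you would get from the default mon_target_pg_per_osd of 100 times this host's three OSDs (both values inferred, not printed in the log). The target is then quantized to a power of two, subject to per-pool minimums and a change threshold, which the sketch below only approximates:

    import math

    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd (default 100) * 3 OSDs

    def pg_target(usage_ratio, bias=1.0):
        return usage_ratio * bias * TARGET_PGS

    # 'images' pool from the log: using 0.0009191... of space, bias 1.0
    print(pg_target(0.0009191400908380543))       # ~0.27574, matches the logged target
    # 'cephfs.cephfs.meta' pool: bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, matches the logged target

    def quantize(target, pg_min=1):
        # Simplified rounding to a power of two with a floor; the real autoscaler
        # also honors pg_num_min and skips changes below roughly a 3x threshold,
        # which is why the tiny targets above stay at their current 32 PGs.
        return max(pg_min, 1 if target <= 1 else 2 ** math.ceil(math.log2(target)))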
Nov 26 02:25:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:52 compute-0 nova_compute[350387]: 2025-11-26 02:25:52.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:52 compute-0 nova_compute[350387]: 2025-11-26 02:25:52.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:25:52 compute-0 nova_compute[350387]: 2025-11-26 02:25:52.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:25:52 compute-0 nova_compute[350387]: 2025-11-26 02:25:52.320 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 02:25:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:25:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.1 total, 600.0 interval
    Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2978 syncs, 3.54 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 643 writes, 1806 keys, 643 commit groups, 1.0 writes per commit group, ingest: 0.76 MB, 0.00 MB/s
    Interval WAL: 643 writes, 302 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]: {
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_id": 0,
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "type": "bluestore"
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    },
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_id": 2,
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "type": "bluestore"
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    },
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_id": 1,
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:        "type": "bluestore"
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]:    }
Nov 26 02:25:52 compute-0 vibrant_darwin[467816]: }
Nov 26 02:25:52 compute-0 systemd[1]: libpod-75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0.scope: Deactivated successfully.
Nov 26 02:25:52 compute-0 podman[467800]: 2025-11-26 02:25:52.81758141 +0000 UTC m=+1.559843816 container died 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:25:52 compute-0 systemd[1]: libpod-75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0.scope: Consumed 1.272s CPU time.
Nov 26 02:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a5fe1efe668ccf437315dfbced25689075a1d3924b96ccfbbe14720c579b63-merged.mount: Deactivated successfully.
Nov 26 02:25:52 compute-0 podman[467800]: 2025-11-26 02:25:52.92792289 +0000 UTC m=+1.670185296 container remove 75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:25:52 compute-0 systemd[1]: libpod-conmon-75ef132dc7a10ec0f641894a33638f699d9552cb50332a28fcd764ea22ab3ce0.scope: Deactivated successfully.
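[annotation] The second JSON blob (from vibrant_darwin) is the activation-side view of the same three OSDs, keyed by OSD fsid with the bluestore device-mapper paths; it resembles `ceph-volume raw list` output. A small consistency check over such a saved dump, confirming every OSD reports the same cluster fsid (file name hypothetical):

    import json

    with open("raw_list.json") as f:          # hypothetical path for the dump above
        osds = json.load(f)

    fsids = {entry["ceph_fsid"] for entry in osds.values()}
    assert len(fsids) == 1, f"mixed cluster fsids: {fsids}"
    for entry in sorted(osds.values(), key=lambda e: e["osd_id"]):
        print(f"osd.{entry['osd_id']} -> {entry['device']} ({entry['type']})")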
Nov 26 02:25:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:25:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:25:53 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:53 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fe2c176d-2fa0-45be-894b-0171fa01de29 does not exist
Nov 26 02:25:53 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b7e40f9e-e97e-4fa7-906e-6d2c430e7c69 does not exist
Nov 26 02:25:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:54 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:25:54 compute-0 nova_compute[350387]: 2025-11-26 02:25:54.027 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:54 compute-0 nova_compute[350387]: 2025-11-26 02:25:54.057 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:55 compute-0 podman[467911]: 2025-11-26 02:25:55.619181945 +0000 UTC m=+0.166284500 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 02:25:55 compute-0 podman[467912]: 2025-11-26 02:25:55.655481312 +0000 UTC m=+0.202670229 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 02:25:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:25:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:57 compute-0 nova_compute[350387]: 2025-11-26 02:25:57.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:25:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:25:59 compute-0 nova_compute[350387]: 2025-11-26 02:25:59.032 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:25:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.1 total, 600.0 interval
    Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
    Cumulative WAL: 11K writes, 3285 syncs, 3.60 writes per sync, written: 0.04 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 524 writes, 1394 keys, 524 commit groups, 1.0 writes per commit group, ingest: 0.69 MB, 0.00 MB/s
    Interval WAL: 524 writes, 229 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:25:59 compute-0 nova_compute[350387]: 2025-11-26 02:25:59.060 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:25:59 compute-0 podman[158021]: time="2025-11-26T02:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:25:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:25:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8192 "" "Go-http-client/1.1"
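[annotation] The two podman[158021] lines above are the libpod REST API service answering a Go client, most likely the prometheus-podman-exporter whose config further down lists 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'. The same containers/json query can be issued from Python over that unix socket; a minimal sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to the libpod REST API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))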
Nov 26 02:26:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:00 compute-0 podman[467957]: 2025-11-26 02:26:00.561195201 +0000 UTC m=+0.117593956 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:26:00 compute-0 podman[467956]: 2025-11-26 02:26:00.578176597 +0000 UTC m=+0.136604789 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 26 02:26:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: ERROR   02:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:26:01 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:26:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:03 compute-0 nova_compute[350387]: 2025-11-26 02:26:03.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:03 compute-0 nova_compute[350387]: 2025-11-26 02:26:03.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:03 compute-0 nova_compute[350387]: 2025-11-26 02:26:03.297 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:26:04 compute-0 nova_compute[350387]: 2025-11-26 02:26:04.034 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:04 compute-0 nova_compute[350387]: 2025-11-26 02:26:04.062 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:26:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.2 total, 600.0 interval
    Cumulative writes: 9943 writes, 38K keys, 9943 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 9943 writes, 2702 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 489 writes, 1339 keys, 489 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
    Interval WAL: 489 writes, 225 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:26:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 02:26:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:06 compute-0 podman[467995]: 2025-11-26 02:26:06.591929451 +0000 UTC m=+0.129906591 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:26:06 compute-0 podman[467994]: 2025-11-26 02:26:06.594893574 +0000 UTC m=+0.140353623 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal)
Nov 26 02:26:07 compute-0 nova_compute[350387]: 2025-11-26 02:26:07.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:08 compute-0 ovn_controller[89102]: 2025-11-26T02:26:08Z|00175|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 26 02:26:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:09 compute-0 nova_compute[350387]: 2025-11-26 02:26:09.038 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:09 compute-0 nova_compute[350387]: 2025-11-26 02:26:09.064 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:14 compute-0 nova_compute[350387]: 2025-11-26 02:26:14.040 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:14 compute-0 nova_compute[350387]: 2025-11-26 02:26:14.068 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:17 compute-0 podman[468041]: 2025-11-26 02:26:17.577003754 +0000 UTC m=+0.114947702 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Nov 26 02:26:17 compute-0 podman[468040]: 2025-11-26 02:26:17.589314419 +0000 UTC m=+0.135218400 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 26 02:26:17 compute-0 podman[468042]: 2025-11-26 02:26:17.605144572 +0000 UTC m=+0.136521086 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
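The three health_status events above are podman's periodic healthchecks firing for ovn_metadata_agent, ceilometer_agent_compute and podman_exporter: each container's config_data carries a 'healthcheck' dict whose 'test' command runs against the mounted /openstack script, and health_failing_streak=0 means recent runs all passed. A minimal sketch of triggering the same check by hand, assuming the podman CLI and the container names from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command; exit code 0 corresponds to health_status=healthy.
    for name in ("ovn_metadata_agent", "ceilometer_agent_compute", "podman_exporter"):
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        print(name, "healthy" if result.returncode == 0 else "unhealthy")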
Nov 26 02:26:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
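ceph-mgr prints one pgmap epoch roughly every two seconds; here all 321 PGs are active+clean with 279 MiB used of 60 GiB raw. The same digest is available programmatically; a sketch assuming ceph admin access on this node:

    import json, subprocess

    # "ceph -s --format json" returns the status the pgmap lines summarize.
    out = subprocess.run(["ceph", "-s", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    pgmap = json.loads(out)["pgmap"]
    print(pgmap["num_pgs"], "pgs,", pgmap["bytes_used"], "of",
          pgmap["bytes_total"], "bytes used")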
Nov 26 02:26:19 compute-0 nova_compute[350387]: 2025-11-26 02:26:19.043 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:19 compute-0 nova_compute[350387]: 2025-11-26 02:26:19.072 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
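The recurring ovsdbapp.backend.ovs_idl.vlog lines are nova-compute's OVSDB IDL thread waking from poll() whenever its database socket (fd 24 here) becomes readable; __log_wakeup just reports which registered fd fired. A stdlib sketch of that wait-for-POLLIN pattern (not the ovs library itself), using a socketpair as a stand-in for the OVSDB connection:

    import select, socket

    # Register an fd for POLLIN and block until it is readable, mirroring
    # the "[POLLIN] on fd N __log_wakeup" trace above.
    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)

    b.send(b"notification")              # stand-in for an OVSDB update
    for fd, event in poller.poll():      # blocks until the fd wakes us
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}:", a.recv(64))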
Nov 26 02:26:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:24 compute-0 nova_compute[350387]: 2025-11-26 02:26:24.046 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:24 compute-0 nova_compute[350387]: 2025-11-26 02:26:24.075 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:26:25.015 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:26:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:26:25.015 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:26:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:26:25.016 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
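These three lines are one pass of the metadata agent's ProcessMonitor: oslo.concurrency serializes _check_child_processes behind a named lock and logs acquire, acquired (with wait time) and released (with hold time). A sketch of the same pattern with the real decorator; the function body is a hypothetical stand-in:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Hypothetical stand-in for ProcessMonitor._check_child_processes;
        # everything in here runs under the named lock, producing the
        # Acquiring/acquired/released DEBUG triple seen above.
        pass

    check_child_processes()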
Nov 26 02:26:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:26 compute-0 podman[468098]: 2025-11-26 02:26:26.583975974 +0000 UTC m=+0.133738919 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:26:26 compute-0 podman[468099]: 2025-11-26 02:26:26.615351013 +0000 UTC m=+0.169723267 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 02:26:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:26:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2945676387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:26:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:26:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2945676387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
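The audit lines show client.openstack (the credential OpenStack's storage services use) dispatching two mon commands: a cluster df and a quota read on the volumes pool. The same calls can be made through the librados Python binding, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", json.loads(outbuf) if ret == 0 else outs)
    cluster.shutdown()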
Nov 26 02:26:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:29 compute-0 nova_compute[350387]: 2025-11-26 02:26:29.049 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:29 compute-0 nova_compute[350387]: 2025-11-26 02:26:29.079 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:29 compute-0 podman[158021]: time="2025-11-26T02:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:26:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:26:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8190 "" "Go-http-client/1.1"
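The two HTTP access lines come from the podman system service: prometheus-podman-exporter lists all containers and then pulls one-shot stats through the libpod REST API on /run/podman/podman.sock (the "@" is the unix-socket client, and the INFO line notes that last=0 overrides limit). A stdlib sketch of the same GET over the socket, assuming permission to read it:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a unix socket; the libpod API has no TCP port here."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")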
Nov 26 02:26:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:31 compute-0 openstack_network_exporter[367323]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:26:31 compute-0 openstack_network_exporter[367323]: ERROR   02:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:26:31 compute-0 openstack_network_exporter[367323]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:26:31 compute-0 openstack_network_exporter[367323]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:26:31 compute-0 openstack_network_exporter[367323]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
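These exporter errors are expected on a compute-only node: openstack_network_exporter locates daemons via their ovs-appctl control sockets, and ovn-northd only runs on controller nodes, while the pmd-rxq-show/pmd-perf-show calls need a userspace (netdev) datapath that this host does not use. A sketch of the socket probe behind the "no control socket files found" message, assuming the conventional rundir layout (paths illustrative):

    import glob

    # Daemons advertise a control socket such as /run/ovn/ovn-northd.<pid>.ctl;
    # when the glob matches nothing, the exporter reports the errors above.
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")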
Nov 26 02:26:31 compute-0 podman[468142]: 2025-11-26 02:26:31.57523281 +0000 UTC m=+0.119132569 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible)
Nov 26 02:26:31 compute-0 podman[468143]: 2025-11-26 02:26:31.574367766 +0000 UTC m=+0.106685100 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:26:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:34 compute-0 nova_compute[350387]: 2025-11-26 02:26:34.052 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:34 compute-0 nova_compute[350387]: 2025-11-26 02:26:34.081 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:37 compute-0 podman[468181]: 2025-11-26 02:26:37.569399046 +0000 UTC m=+0.109113858 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:26:37 compute-0 podman[468180]: 2025-11-26 02:26:37.578741908 +0000 UTC m=+0.124586592 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
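node_exporter and openstack_network_exporter report healthy above; together with podman_exporter they expose metrics on host ports 9100, 9105 and 9882 (see the 'ports' entries in config_data), each behind a web.config.file that may enforce TLS with the mounted certificates. A spot-check sketch, assuming plain HTTP is permitted by those configs:

    import urllib.request

    # Ports taken from the exporters' config_data above; switch to https://
    # with the mounted certificates if the web config enforces TLS.
    for port in (9100, 9105, 9882):
        with urllib.request.urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
            print(port, resp.status, len(resp.read()), "bytes of metrics")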
Nov 26 02:26:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:39 compute-0 nova_compute[350387]: 2025-11-26 02:26:39.054 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:39 compute-0 nova_compute[350387]: 2025-11-26 02:26:39.083 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:26:41
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'backups']
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
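The balancer pass above ran in upmap mode with a 5% max-misplaced budget across the eleven listed pools and prepared 0/10 changes, meaning no PG remapping was needed. The module's state can be read back the same way; a sketch assuming ceph admin access:

    import json, subprocess

    # "prepared 0/10 changes" means the optimizer found nothing to move.
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print("mode:", status["mode"], "active:", status["active"])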
Nov 26 02:26:41 compute-0 systemd-logind[800]: New session 63 of user zuul.
Nov 26 02:26:41 compute-0 systemd[1]: Started Session 63 of User zuul.
Nov 26 02:26:41 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
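Each pool appears twice in the load_schedules lines because two rbd_support handlers reload it independently: MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler (start_after= is just an empty pagination cursor). The loaded schedules can be listed per pool; a sketch assuming rbd admin access:

    import subprocess

    # One listing per handler per pool, matching the duplicated log lines.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool], check=False)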
Nov 26 02:26:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.879 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.880 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.880 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.881 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.883 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.885 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.887 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.887 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
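[annotation] The discovery/skip pairs above show each disk and power pollster invoking the local_instances discovery and bailing out because no instances are running on this host this cycle. A minimal sketch of that control flow (a paraphrase with illustrative names, not the actual ceilometer source):

    # Hypothetical simplification of the per-pollster cycle logged above.
    def run_pollster(pollster, discover_local_instances, log):
        resources = discover_local_instances()  # [] on a host with no instances
        if not resources:
            log.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            return []
        # Only reached once discovery returns instances to meter.
        return [s for r in resources for s in pollster.get_samples(r)]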
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.896 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.896 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.897 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.898 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:26:42.899 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:26:44 compute-0 nova_compute[350387]: 2025-11-26 02:26:44.056 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:44 compute-0 nova_compute[350387]: 2025-11-26 02:26:44.085 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:45 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15527 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:26:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:46 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15529 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:26:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 02:26:46 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327690387' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
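[annotation] The handle_command/audit pairs show how each client command ("orch status", "crash ls", "status") reaches the monitor as a JSON mon_command and is logged to the audit channel before dispatch. The same call can be issued from Python with the rados bindings; a sketch assuming python3-rados and a reachable client.admin keyring, not taken from this deployment:

    import json
    import rados

    # Connect as the same admin entity seen in the audit lines above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    # mon_command takes the JSON command plus an input buffer and returns
    # (retcode, output bytes, status string) -- mirroring the dispatched
    # {"prefix": "status"} entry in the audit channel.
    ret, out, status = cluster.mon_command(json.dumps({"prefix": "status"}), b"")
    print(ret, out.decode())
    cluster.shutdown()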
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.447 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.447 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.448 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.448 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:26:47 compute-0 nova_compute[350387]: 2025-11-26 02:26:47.448 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
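[annotation] The clean_compute_node_cache lines above show oslo.concurrency's named-lock pattern: acquire "compute_resources", run the critical section, release, with waited/held durations logged. A minimal sketch of the same primitive (the lockutils API is standard oslo.concurrency; the body is a stand-in):

    from oslo_concurrency import lockutils

    def clean_compute_node_cache():
        pass  # stand-in for the resource tracker's critical section

    # Same named in-process lock the tracker logs as
    # "Acquiring lock ... acquired ... released" with timings.
    with lockutils.lock("compute_resources"):
        clean_compute_node_cache()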
Nov 26 02:26:47 compute-0 podman[468496]: 2025-11-26 02:26:47.955541878 +0000 UTC m=+0.109071957 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 26 02:26:47 compute-0 podman[468498]: 2025-11-26 02:26:47.957714608 +0000 UTC m=+0.102669007 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:26:47 compute-0 podman[468494]: 2025-11-26 02:26:47.963063868 +0000 UTC m=+0.122802141 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
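[annotation] The three health_status events above come from podman's healthcheck timers; each container's config_data embeds a healthcheck test script mounted at /openstack. The same check can be triggered by hand; a sketch using the podman CLI from Python, with the container name taken from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured healthcheck
    # (the /openstack/healthcheck script mounted per config_data above) and
    # exits 0 when healthy -- the check behind health_status=healthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")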
Nov 26 02:26:47 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:26:47 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463121523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.015 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:26:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.453 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.456 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3933MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.457 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.457 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.589 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.589 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.614 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.642 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.642 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
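[annotation] The inventory pushed to placement combines totals, reserved amounts, and allocation ratios; placement treats usable capacity per resource class as (total - reserved) x allocation_ratio, so the logged figures work out as below (plain arithmetic on the values above):

    # Usable capacity as placement computes it: (total - reserved) * ratio.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2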
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.666 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.692 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 02:26:48 compute-0 nova_compute[350387]: 2025-11-26 02:26:48.724 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.059 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.088 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:49 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:26:49 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741054372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.239 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.250 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.286 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.289 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:26:49 compute-0 nova_compute[350387]: 2025-11-26 02:26:49.289 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:26:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:51 compute-0 nova_compute[350387]: 2025-11-26 02:26:51.290 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:51 compute-0 nova_compute[350387]: 2025-11-26 02:26:51.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:51 compute-0 nova_compute[350387]: 2025-11-26 02:26:51.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:26:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
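[annotation] Each _maybe_adjust pass above prints, per pool, a raw pg target of capacity ratio x bias x an overall PG budget, then quantizes it to a power of two (never below the pool's floor). The budget itself is not printed, but the logged ratios all fit a budget of 300, consistent with three OSDs at the default mon_target_pg_per_osd of 100; a quick check on the logged values:

    # Reproduce the raw pg targets printed above (budget of 300 inferred
    # from the ratios; 300 = 3 OSDs x mon_target_pg_per_osd=100).
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 300)
    # .mgr               -> 0.0021557249951162337 (matches the log)
    # images             -> 0.2757420272514163    (matches the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (matches the log)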
Nov 26 02:26:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:52 compute-0 nova_compute[350387]: 2025-11-26 02:26:52.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:52 compute-0 nova_compute[350387]: 2025-11-26 02:26:52.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:26:52 compute-0 nova_compute[350387]: 2025-11-26 02:26:52.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:26:52 compute-0 nova_compute[350387]: 2025-11-26 02:26:52.355 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
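[annotation] _heal_instance_info_cache, like the other ComputeManager tasks logged above, is driven by oslo.service's periodic task machinery: decorated methods collected on a class and run on a timer. A minimal sketch of that standard pattern (the class body is illustrative, not nova's code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Stand-in for rebuilding one instance's network info cache,
            # as the "Rebuilding the list of instances to heal" lines show.
            pass

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)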
Nov 26 02:26:54 compute-0 nova_compute[350387]: 2025-11-26 02:26:54.061 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:54 compute-0 nova_compute[350387]: 2025-11-26 02:26:54.089 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:26:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 47a1de41-3fa6-4396-adfa-84c30838b1bf does not exist
Nov 26 02:26:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bdded0fb-6f7c-447b-9aa0-40783193844e does not exist
Nov 26 02:26:54 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bcef4dde-372c-413c-9786-7c4be7f5fa9d does not exist
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:26:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:26:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:26:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:26:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:26:55 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.781817717 +0000 UTC m=+0.091841924 container create 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.743256227 +0000 UTC m=+0.053280504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:26:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:26:55 compute-0 systemd[1]: Started libpod-conmon-46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49.scope.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.856283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015856358, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1173, "num_deletes": 258, "total_data_size": 1673195, "memory_usage": 1701616, "flush_reason": "Manual Compaction"}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015873249, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1656346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46264, "largest_seqno": 47436, "table_properties": {"data_size": 1650640, "index_size": 3037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12170, "raw_average_key_size": 19, "raw_value_size": 1639088, "raw_average_value_size": 2656, "num_data_blocks": 136, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764123907, "oldest_key_time": 1764123907, "file_creation_time": 1764124015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 17053 microseconds, and 9958 cpu microseconds.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.873332) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1656346 bytes OK
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.873359) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.876272) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.876294) EVENT_LOG_v1 {"time_micros": 1764124015876287, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.876318) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1667738, prev total WAL file size 1667738, number of live WAL files 2.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.877767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373535' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1617KB)], [110(7556KB)]
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015877812, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9393911, "oldest_snapshot_seqno": -1}
Nov 26 02:26:55 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6158 keys, 9289214 bytes, temperature: kUnknown
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015936778, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9289214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9248743, "index_size": 23977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 160786, "raw_average_key_size": 26, "raw_value_size": 9137991, "raw_average_value_size": 1483, "num_data_blocks": 957, "num_entries": 6158, "num_filter_entries": 6158, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764124015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.939582488 +0000 UTC m=+0.249606775 container init 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.937152) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9289214 bytes
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.940533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.1 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 6689, records dropped: 531 output_compression: NoCompression
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.940566) EVENT_LOG_v1 {"time_micros": 1764124015940551, "job": 66, "event": "compaction_finished", "compaction_time_micros": 59029, "compaction_time_cpu_micros": 37657, "output_level": 6, "num_output_files": 1, "total_output_size": 9289214, "num_input_records": 6689, "num_output_records": 6158, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015941256, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124015943879, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.877594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.944059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.944507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.944514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.944517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:26:55 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:26:55.944520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
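[annotation] JOB 66's compaction summary can be sanity-checked from the EVENT_LOG_v1 byte counts above: 1,656,346 bytes of freshly flushed L0 input plus the existing L6 file (9,393,911 bytes total input) produced a single 9,289,214-byte L6 table, which reproduces the logged write-amplify(5.6) and read-write-amplify(11.3):

    # Byte counts taken from the EVENT_LOG_v1 entries for JOB 66 above.
    l0_in = 1_656_346     # table #112, the flushed memtable
    total_in = 9_393_911  # input_data_size: #112 + #110
    out = 9_289_214       # table #113 written to L6
    print(round(out / l0_in, 1))               # 5.6  -> write-amplify
    print(round((total_in + out) / l0_in, 1))  # 11.3 -> read-write-amplify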
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.958528678 +0000 UTC m=+0.268552915 container start 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.964712122 +0000 UTC m=+0.274736419 container attach 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:26:55 compute-0 sad_booth[468912]: 167 167
Nov 26 02:26:55 compute-0 systemd[1]: libpod-46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49.scope: Deactivated successfully.
Nov 26 02:26:55 compute-0 podman[468895]: 2025-11-26 02:26:55.973326593 +0000 UTC m=+0.283350820 container died 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-45e5e5c4c93a2db97ff174bed81c7927bbf6288745d31ae9ee8b529654de7bef-merged.mount: Deactivated successfully.
Nov 26 02:26:56 compute-0 podman[468895]: 2025-11-26 02:26:56.065274829 +0000 UTC m=+0.375299056 container remove 46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:26:56 compute-0 systemd[1]: libpod-conmon-46c1abb4885212f4452838848bf9868f76a3f4733bdf8645b65d35409cd00c49.scope: Deactivated successfully.
Nov 26 02:26:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:56 compute-0 podman[468935]: 2025-11-26 02:26:56.356923081 +0000 UTC m=+0.089388246 container create 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 02:26:56 compute-0 podman[468935]: 2025-11-26 02:26:56.331863559 +0000 UTC m=+0.064328724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:26:56 compute-0 systemd[1]: Started libpod-conmon-6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077.scope.
Nov 26 02:26:56 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:56 compute-0 podman[468935]: 2025-11-26 02:26:56.523985502 +0000 UTC m=+0.256450657 container init 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:26:56 compute-0 podman[468935]: 2025-11-26 02:26:56.547425179 +0000 UTC m=+0.279890334 container start 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 26 02:26:56 compute-0 podman[468935]: 2025-11-26 02:26:56.553965892 +0000 UTC m=+0.286431047 container attach 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 02:26:57 compute-0 nova_compute[350387]: 2025-11-26 02:26:57.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:26:57 compute-0 podman[468963]: 2025-11-26 02:26:57.587993823 +0000 UTC m=+0.134535101 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 26 02:26:57 compute-0 podman[468964]: 2025-11-26 02:26:57.618563559 +0000 UTC m=+0.156216388 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 02:26:57 compute-0 eager_carson[468950]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:26:57 compute-0 eager_carson[468950]: --> relative data size: 1.0
Nov 26 02:26:57 compute-0 eager_carson[468950]: --> All data devices are unavailable
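The three eager_carson lines above are the report pass of a cephadm-driven ceph-volume run: it saw 0 physical disks and 3 LVM logical volumes and rejected all of them as data devices, which is the expected outcome when the LVs already carry prepared OSDs (compare the lvm list payload further below). A minimal sketch of inspecting such rejections, assuming the `ceph-volume inventory --format json` payload with its "path", "available" and "rejected_reasons" fields (field names as in recent ceph-volume releases; verify against your build):

    # Sketch: list devices ceph-volume would reject, with its reasons.
    # Assumes `ceph-volume inventory --format json` output where each entry
    # carries "path", "available" and "rejected_reasons" (verify the field
    # names against your ceph-volume version).
    import json
    import subprocess

    def rejected_devices():
        out = subprocess.run(
            ["ceph-volume", "inventory", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return {
            d["path"]: d.get("rejected_reasons", [])
            for d in json.loads(out)
            if not d.get("available", False)
        }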
Nov 26 02:26:57 compute-0 systemd[1]: libpod-6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077.scope: Deactivated successfully.
Nov 26 02:26:57 compute-0 systemd[1]: libpod-6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077.scope: Consumed 1.281s CPU time.
Nov 26 02:26:57 compute-0 podman[468935]: 2025-11-26 02:26:57.88411599 +0000 UTC m=+1.616581145 container died 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 02:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-799d2443749ec3c874174a7573e6f20a25aa7be15204fae3bf16f515daca3ae7-merged.mount: Deactivated successfully.
Nov 26 02:26:57 compute-0 podman[468935]: 2025-11-26 02:26:57.977540777 +0000 UTC m=+1.710005902 container remove 6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 02:26:58 compute-0 systemd[1]: libpod-conmon-6b783876bed5f10209957369bb6f8cc4f6dae88fb8cffbb961b1780951754077.scope: Deactivated successfully.
Nov 26 02:26:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:26:59 compute-0 nova_compute[350387]: 2025-11-26 02:26:59.064 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:59 compute-0 nova_compute[350387]: 2025-11-26 02:26:59.092 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.150372018 +0000 UTC m=+0.090847796 container create e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.118225818 +0000 UTC m=+0.058701606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:26:59 compute-0 systemd[1]: Started libpod-conmon-e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e.scope.
Nov 26 02:26:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.37243846 +0000 UTC m=+0.312914228 container init e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.393513341 +0000 UTC m=+0.333989129 container start e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:26:59 compute-0 goofy_roentgen[469190]: 167 167
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.408984504 +0000 UTC m=+0.349460272 container attach e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 02:26:59 compute-0 systemd[1]: libpod-e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e.scope: Deactivated successfully.
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.411288089 +0000 UTC m=+0.351763837 container died e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-98193d62908c629a83f4943a7a48ce3618fa12a6c8c4167c5a80602694e77928-merged.mount: Deactivated successfully.
Nov 26 02:26:59 compute-0 podman[469174]: 2025-11-26 02:26:59.484469609 +0000 UTC m=+0.424945347 container remove e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 02:26:59 compute-0 systemd[1]: libpod-conmon-e992aa241f39beda08a63b635210b9c8c8ed00f7b87fbb3b37a1e0720531f55e.scope: Deactivated successfully.
Nov 26 02:26:59 compute-0 podman[158021]: time="2025-11-26T02:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:26:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:26:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8198 "" "Go-http-client/1.1"
Nov 26 02:26:59 compute-0 podman[469214]: 2025-11-26 02:26:59.795813553 +0000 UTC m=+0.097944416 container create 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:26:59 compute-0 systemd[1]: Started libpod-conmon-37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62.scope.
Nov 26 02:26:59 compute-0 podman[469214]: 2025-11-26 02:26:59.767610112 +0000 UTC m=+0.069741035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:26:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b921588ccb98bdbb5e63a248583c058f30b0b4dcff0c574340b5104a87bb452c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b921588ccb98bdbb5e63a248583c058f30b0b4dcff0c574340b5104a87bb452c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b921588ccb98bdbb5e63a248583c058f30b0b4dcff0c574340b5104a87bb452c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b921588ccb98bdbb5e63a248583c058f30b0b4dcff0c574340b5104a87bb452c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:26:59 compute-0 podman[469214]: 2025-11-26 02:26:59.943033878 +0000 UTC m=+0.245164771 container init 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:26:59 compute-0 podman[469214]: 2025-11-26 02:26:59.969592192 +0000 UTC m=+0.271723035 container start 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:26:59 compute-0 podman[469214]: 2025-11-26 02:26:59.975209919 +0000 UTC m=+0.277340772 container attach 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:27:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]: {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    "0": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "devices": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "/dev/loop3"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            ],
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_name": "ceph_lv0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_size": "21470642176",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "name": "ceph_lv0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "tags": {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_name": "ceph",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.crush_device_class": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.encrypted": "0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_id": "0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.vdo": "0"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            },
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "vg_name": "ceph_vg0"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        }
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    ],
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    "1": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "devices": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "/dev/loop4"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            ],
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_name": "ceph_lv1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_size": "21470642176",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "name": "ceph_lv1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "tags": {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_name": "ceph",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.crush_device_class": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.encrypted": "0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_id": "1",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.vdo": "0"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            },
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "vg_name": "ceph_vg1"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        }
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    ],
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    "2": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "devices": [
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "/dev/loop5"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            ],
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_name": "ceph_lv2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_size": "21470642176",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "name": "ceph_lv2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "tags": {
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.cluster_name": "ceph",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.crush_device_class": "",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.encrypted": "0",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osd_id": "2",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:                "ceph.vdo": "0"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            },
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "type": "block",
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:            "vg_name": "ceph_vg2"
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:        }
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]:    ]
Nov 26 02:27:00 compute-0 sweet_rosalind[469231]: }
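The JSON printed by sweet_rosalind is the payload of a `ceph-volume lvm list --format json` run: the top-level keys are OSD ids ("0", "1", "2"), each mapping to a list of LV records whose ceph.* tags identify the cluster fsid, OSD fsid and device role. A minimal sketch of consuming it, assuming exactly the structure shown above:

    # Sketch: map OSD id -> block LV path from the `ceph-volume lvm list
    # --format json` payload printed above (structure as shown in the log).
    import json

    def osd_block_devices(payload: str) -> dict[int, str]:
        devices = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    devices[int(osd_id)] = lv["lv_path"]
        return devices

    # For the payload above this yields:
    # {0: "/dev/ceph_vg0/ceph_lv0", 1: "/dev/ceph_vg1/ceph_lv1",
    #  2: "/dev/ceph_vg2/ceph_lv2"}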
Nov 26 02:27:00 compute-0 systemd[1]: libpod-37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62.scope: Deactivated successfully.
Nov 26 02:27:00 compute-0 podman[469214]: 2025-11-26 02:27:00.904014463 +0000 UTC m=+1.206145396 container died 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b921588ccb98bdbb5e63a248583c058f30b0b4dcff0c574340b5104a87bb452c-merged.mount: Deactivated successfully.
Nov 26 02:27:01 compute-0 podman[469214]: 2025-11-26 02:27:01.025560787 +0000 UTC m=+1.327691640 container remove 37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:27:01 compute-0 systemd[1]: libpod-conmon-37d504efb14c0f5dea86e730de4ac693f8298b20398e203fae5e75c20b99eb62.scope: Deactivated successfully.
Nov 26 02:27:01 compute-0 openstack_network_exporter[367323]: ERROR   02:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:27:01 compute-0 openstack_network_exporter[367323]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:27:01 compute-0 openstack_network_exporter[367323]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:27:01 compute-0 openstack_network_exporter[367323]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:27:01 compute-0 openstack_network_exporter[367323]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:27:01 compute-0 podman[469352]: 2025-11-26 02:27:01.739350957 +0000 UTC m=+0.115912489 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, release-0.7.12=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 02:27:01 compute-0 podman[469353]: 2025-11-26 02:27:01.776792866 +0000 UTC m=+0.138591144 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:27:02 compute-0 podman[469432]: 2025-11-26 02:27:02.213217264 +0000 UTC m=+0.087799251 container create deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 02:27:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:02 compute-0 podman[469432]: 2025-11-26 02:27:02.180705473 +0000 UTC m=+0.055287470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:27:02 compute-0 systemd[1]: Started libpod-conmon-deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b.scope.
Nov 26 02:27:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:27:02 compute-0 podman[469432]: 2025-11-26 02:27:02.36725183 +0000 UTC m=+0.241833847 container init deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:27:02 compute-0 podman[469432]: 2025-11-26 02:27:02.390433499 +0000 UTC m=+0.265015496 container start deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:27:02 compute-0 podman[469432]: 2025-11-26 02:27:02.397078685 +0000 UTC m=+0.271660682 container attach deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 02:27:02 compute-0 flamboyant_satoshi[469450]: 167 167
Nov 26 02:27:02 compute-0 systemd[1]: libpod-deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b.scope: Deactivated successfully.
Nov 26 02:27:02 compute-0 conmon[469450]: conmon deacd1f2af2d56c1c272 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b.scope/container/memory.events
Nov 26 02:27:02 compute-0 podman[469455]: 2025-11-26 02:27:02.484726711 +0000 UTC m=+0.053087218 container died deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c83166d399227767bc20beedfb9f8e716a3d2c795307583d4469797a5e27ee2-merged.mount: Deactivated successfully.
Nov 26 02:27:02 compute-0 podman[469455]: 2025-11-26 02:27:02.55108342 +0000 UTC m=+0.119443887 container remove deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:27:02 compute-0 systemd[1]: libpod-conmon-deacd1f2af2d56c1c272fd835dfb64196069fe10d0979d016236169478f0c13b.scope: Deactivated successfully.
Nov 26 02:27:02 compute-0 podman[469477]: 2025-11-26 02:27:02.834999925 +0000 UTC m=+0.089749585 container create d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:27:02 compute-0 podman[469477]: 2025-11-26 02:27:02.796641451 +0000 UTC m=+0.051391181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:27:02 compute-0 systemd[1]: Started libpod-conmon-d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b.scope.
Nov 26 02:27:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5446bbfd9f8f50c19d3383d863169e9e2461c8b93f862025c331d97fbc9b27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5446bbfd9f8f50c19d3383d863169e9e2461c8b93f862025c331d97fbc9b27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5446bbfd9f8f50c19d3383d863169e9e2461c8b93f862025c331d97fbc9b27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5446bbfd9f8f50c19d3383d863169e9e2461c8b93f862025c331d97fbc9b27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:27:03 compute-0 podman[469477]: 2025-11-26 02:27:03.025684198 +0000 UTC m=+0.280433908 container init d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 02:27:03 compute-0 podman[469477]: 2025-11-26 02:27:03.051223434 +0000 UTC m=+0.305973074 container start d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:27:03 compute-0 podman[469477]: 2025-11-26 02:27:03.056579444 +0000 UTC m=+0.311329144 container attach d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 26 02:27:03 compute-0 nova_compute[350387]: 2025-11-26 02:27:03.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:27:03 compute-0 nova_compute[350387]: 2025-11-26 02:27:03.303 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:27:03 compute-0 ovs-vsctl[469524]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
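
The ovs-vsctl ERR above is what a read of other_config:dpdk-init from the Open_vSwitch table returns when the key was never set; on a node without DPDK configured this is expected noise, not a failure. A hedged sketch of such a probe that treats the missing key as "DPDK disabled":

    import subprocess

    def dpdk_enabled() -> bool:
        """Read other_config:dpdk-init; a missing key produces exactly
        the db_ctl_base ERR seen in the log and means DPDK is off."""
        res = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".",
             "other_config:dpdk-init"],
            capture_output=True, text=True, check=False,
        )
        if res.returncode != 0:   # key absent
            return False
        # dpdk-init is typically "true" or "try" when DPDK is enabled.
        return res.stdout.strip().strip('"').lower() in ("true", "try")
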
Nov 26 02:27:04 compute-0 nova_compute[350387]: 2025-11-26 02:27:04.066 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:04 compute-0 nova_compute[350387]: 2025-11-26 02:27:04.094 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:04 compute-0 happy_wing[469493]: {
Nov 26 02:27:04 compute-0 happy_wing[469493]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_id": 0,
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "type": "bluestore"
Nov 26 02:27:04 compute-0 happy_wing[469493]:    },
Nov 26 02:27:04 compute-0 happy_wing[469493]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_id": 2,
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "type": "bluestore"
Nov 26 02:27:04 compute-0 happy_wing[469493]:    },
Nov 26 02:27:04 compute-0 happy_wing[469493]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_id": 1,
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:27:04 compute-0 happy_wing[469493]:        "type": "bluestore"
Nov 26 02:27:04 compute-0 happy_wing[469493]:    }
Nov 26 02:27:04 compute-0 happy_wing[469493]: }
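
The JSON block printed by the short-lived happy_wing container above looks like ceph-volume list output (most likely `ceph-volume raw list`, which cephadm runs in a disposable ceph container to inventory OSDs): three bluestore OSDs on LVM devices, all in cluster fsid 36901f64-240e-5c29-a2e2-29b56f2c329c. A small sketch, on a trimmed copy of the payload, of turning it into an osd_id-to-device map:

    import json

    # Trimmed copy of the container output logged above (one of the
    # three OSD entries).
    payload = """
    {
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }
    """

    osd_map = {e["osd_id"]: e["device"] for e in json.loads(payload).values()}
    print(osd_map)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
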
Nov 26 02:27:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:04 compute-0 nova_compute[350387]: 2025-11-26 02:27:04.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:27:04 compute-0 systemd[1]: libpod-d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b.scope: Deactivated successfully.
Nov 26 02:27:04 compute-0 podman[469477]: 2025-11-26 02:27:04.307089231 +0000 UTC m=+1.561838891 container died d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:27:04 compute-0 systemd[1]: libpod-d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b.scope: Consumed 1.253s CPU time.
Nov 26 02:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f5446bbfd9f8f50c19d3383d863169e9e2461c8b93f862025c331d97fbc9b27-merged.mount: Deactivated successfully.
Nov 26 02:27:04 compute-0 podman[469477]: 2025-11-26 02:27:04.4276736 +0000 UTC m=+1.682423260 container remove d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wing, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 26 02:27:04 compute-0 systemd[1]: libpod-conmon-d9592f599cf24ddbff1c9dacfb20520e901d8bda045024d40cc4bdae2a33957b.scope: Deactivated successfully.
Nov 26 02:27:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:27:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:27:04 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:27:04 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:27:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev da78460f-5ab0-47d5-9ebe-33456d28b0b2 does not exist
Nov 26 02:27:04 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 22f4f8c2-a24b-4707-9ebc-6ccbf9ff5491 does not exist
Nov 26 02:27:05 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 02:27:05 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 02:27:05 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
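
The three virtqemud failures above reflect libvirt's modular-daemon split: virtqemud probes the read-only sockets of virtnetworkd, virtnwfilterd and virtstoraged, and on this node neither those daemons nor their socket-activation units are providing the paths; for a Nova compute that does not use libvirt networks, nwfilter rules or storage pools, this is typically harmless. A quick probe for which companion sockets exist:

    import os

    # Read-only sockets of the modular libvirt daemons named above.
    for name in ("virtnetworkd", "virtnwfilterd", "virtstoraged"):
        path = f"/var/run/libvirt/{name}-sock-ro"
        state = "present" if os.path.exists(path) else "missing (as logged)"
        print(f"{path}: {state}")
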
Nov 26 02:27:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:27:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:27:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:27:06 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: cache status {prefix=cache status} (starting...)
Nov 26 02:27:06 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: client ls {prefix=client ls} (starting...)
Nov 26 02:27:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:06 compute-0 lvm[469953]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 02:27:06 compute-0 lvm[469953]: VG ceph_vg2 finished
Nov 26 02:27:06 compute-0 lvm[469960]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 02:27:06 compute-0 lvm[469960]: VG ceph_vg1 finished
Nov 26 02:27:06 compute-0 lvm[469972]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 02:27:06 compute-0 lvm[469972]: VG ceph_vg0 finished
Nov 26 02:27:06 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 02:27:07 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15537 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 02:27:07 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15541 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:07 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 02:27:07 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214821281' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 02:27:07 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 02:27:08 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 02:27:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:27:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770682557' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:27:08 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: ops {prefix=ops} (starting...)
Nov 26 02:27:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:08 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 02:27:08 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4098796593' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 02:27:08 compute-0 podman[470253]: 2025-11-26 02:27:08.566122741 +0000 UTC m=+0.109054997 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:27:08 compute-0 podman[470251]: 2025-11-26 02:27:08.567078858 +0000 UTC m=+0.109862719 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal)
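
The two health_status lines above embed each container's full edpm_ansible definition in a config_data label. The value is a Python-style literal (single quotes, bare True), not JSON, so json.loads would reject it while ast.literal_eval recovers it safely; a sketch on a trimmed copy of the node_exporter entry:

    import ast

    # Trimmed copy of the config_data label logged for node_exporter.
    config_data = (
        "{'image': 'quay.io/prometheus/node-exporter:v1.5.0', "
        "'restart': 'always', 'recreate': True, 'privileged': True, "
        "'ports': ['9100:9100'], 'net': 'host'}"
    )

    cfg = ast.literal_eval(config_data)  # literals only; never use eval()
    print(cfg["image"], cfg["ports"])    # quay.io/... ['9100:9100']
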
Nov 26 02:27:08 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15547 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:08 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:27:08 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:27:08.581+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
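
Both the ceph-mgr line and its container's stderr copy above reject `healthcheck history ls` because the prometheus mgr module is disabled; the error text names the remedy verbatim. A sketch that applies it, assuming an admin keyring is available on the node (the same fix, with `insights` in place of `prometheus`, covers the identical EOPNOTSUPP reply logged a few seconds later at 02:27:11):

    import subprocess

    # The remedy quoted in the mgr error message itself.
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"],
                   check=True)

    # Verify the module now shows up as enabled.
    subprocess.run(["ceph", "mgr", "module", "ls"], check=True)
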
Nov 26 02:27:09 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: session ls {prefix=session ls} (starting...)
Nov 26 02:27:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 02:27:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1217956940' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 02:27:09 compute-0 nova_compute[350387]: 2025-11-26 02:27:09.068 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:09 compute-0 nova_compute[350387]: 2025-11-26 02:27:09.096 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 02:27:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929009946' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 02:27:09 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: status {prefix=status} (starting...)
Nov 26 02:27:09 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 02:27:09 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89083620' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 02:27:09 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15557 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:09 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15559 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 02:27:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724909802' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 02:27:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 02:27:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885798441' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 02:27:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318097729' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 02:27:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295590244' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 02:27:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 02:27:10 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248605859' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15571 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:11 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:27:11.355+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 02:27:11 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 02:27:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 02:27:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603360394' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 02:27:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 02:27:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733920033' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 02:27:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 02:27:11 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073666520' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 02:27:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:12 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15581 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 02:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113497850' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 02:27:12 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15585 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:12 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 02:27:12 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301605293' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb033000/0x0/0x4ffc00000, data 0x1b28221/0x1beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
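
The osd_stat heartbeats report store_statfs fields as hex byte counts; assuming the usual store_statfs_t print order (available / internally reserved / total, then data stored / allocated), the first heartbeat above decodes to a ~20 GiB OSD, consistent with three OSDs backing the pgmap's "60 GiB / 60 GiB avail". A small conversion sketch:

    GIB = 1024 ** 3
    MIB = 1024 ** 2

    # Values copied from the heartbeat above; field meaning is an
    # assumption based on store_statfs_t's customary ordering.
    avail, reserved, total = 0x4fb033000, 0x0, 0x4ffc00000
    data_stored, data_allocated = 0x1b28221, 0x1beb000

    print(f"total {total / GIB:.2f} GiB, avail {avail / GIB:.2f} GiB")
    print(f"data stored {data_stored / MIB:.1f} MiB, "
          f"allocated {data_allocated / MIB:.1f} MiB")
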
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 2793472 heap: 96526336 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 2793472 heap: 96526336 old mem: 2845415832 new mem: 2845415832
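
The recurring prioritycache tune_memory lines are the OSD memory autotuner comparing the process's mapped heap against osd_memory_target (4294967296 = 4 GiB here) and leaving the aggregate cache budget unchanged ("old mem" equals "new mem") because actual usage sits far below target. Restating the first line's numbers:

    GIB = 1024 ** 3
    MIB = 1024 ** 2

    # Figures copied from the tune_memory line above.
    target, mapped, cache = 4294967296, 93732864, 2845415832
    print(f"osd_memory_target {target / GIB:.1f} GiB, "
          f"mapped heap {mapped / MIB:.0f} MiB, "
          f"cache budget held at {cache / GIB:.2f} GiB")
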
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x55731891b0e0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x55731781b800 session 0x55731891b4a0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d000 session 0x5573177a9c20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x557316053860
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 149.760803223s of 149.783859253s, submitted: 3
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 93716480 unmapped: 8060928 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fa2400 session 0x55731891ab40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fc4000 session 0x5573180441e0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557317821c00 session 0x557318bffe00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d000 session 0x557318bffc20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x557318bfeb40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fa2400 session 0x557318bfef00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fc4000 session 0x557315e2e000
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x55731782b400 session 0x557317128f00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d000 session 0x5573160410e0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228641 data_alloc: 234881024 data_used: 14798848
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228641 data_alloc: 234881024 data_used: 14798848
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x557315b0fc20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94806016 unmapped: 6971392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94871552 unmapped: 6905856 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 94879744 unmapped: 6897664 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 97886208 unmapped: 3891200 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261122 data_alloc: 234881024 data_used: 19132416
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98836480 unmapped: 2940928 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98869248 unmapped: 2908160 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261282 data_alloc: 234881024 data_used: 19140608
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261282 data_alloc: 234881024 data_used: 19140608
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261282 data_alloc: 234881024 data_used: 19140608
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261282 data_alloc: 234881024 data_used: 19140608
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98902016 unmapped: 2875392 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 2867200 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98918400 unmapped: 2859008 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 2850816 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261282 data_alloc: 234881024 data_used: 19140608
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 2850816 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 2850816 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 2850816 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fac0b000/0x0/0x4ffc00000, data 0x1f4e293/0x2013000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98926592 unmapped: 2850816 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98934784 unmapped: 2842624 heap: 101777408 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261762 data_alloc: 234881024 data_used: 19152896
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.194286346s of 42.390087128s, submitted: 27
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 4915200 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102498304 unmapped: 3112960 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9273000/0x0/0x4ffc00000, data 0x2746293/0x280b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 3342336 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9255000/0x0/0x4ffc00000, data 0x2764293/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332782 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330106 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102457344 unmapped: 3153920 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.2 total, 600.0 interval
    Cumulative writes: 6678 writes, 26K keys, 6678 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6678 writes, 1316 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 732 writes, 2356 keys, 732 commit groups, 1.0 writes per commit group, ingest: 2.30 MB, 0.00 MB/s
    Interval WAL: 732 writes, 312 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
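
The DB Stats block above is internally consistent; the writes-per-sync figures it reports can be re-derived from its own raw counters:

    # Counters copied from the DB Stats dump above.
    cum_writes, cum_syncs = 6678, 1316
    int_writes, int_syncs = 732, 312

    print(round(cum_writes / cum_syncs, 2))  # 5.07, as reported
    print(round(int_writes / int_syncs, 2))  # 2.35, as reported
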
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 3145728 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 3145728 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330106 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102465536 unmapped: 3145728 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330106 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330106 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330106 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102473728 unmapped: 3137536 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102481920 unmapped: 3129344 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.656017303s of 32.928413391s, submitted: 57
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102490112 unmapped: 3121152 heap: 105611264 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330282 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f9249000/0x0/0x4ffc00000, data 0x2770293/0x2835000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
[90 byte-for-byte duplicate lines omitted: within this same second the tune_memory message above repeats 47 more times, the rocksdb ratio pair 9 more times, the _resize_shards message 9 more times, and the heartbeat 16 more times]
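The tune_memory lines above come from the BlueStore priority-cache autotuner: target is the OSD's memory goal in bytes (4294967296 = 4 GiB, the common osd_memory_target default), mapped/unmapped describe the allocator heap, and old/new mem is the aggregate cache budget the tuner chose; identical old and new values mean the tuner is holding steady. For longer captures it can help to reduce each of these lines to a one-line summary; below is a minimal Python sketch of such a parser. It is a hypothetical helper, not a Ceph tool, and the field interpretation is an assumption to verify against the running Ceph release.

    import re

    # Hypothetical helper for summarizing BlueStore priority-cache autotuner
    # lines like the ones above. Field meanings are assumed, not authoritative:
    # target = OSD memory goal (bytes), mapped/unmapped = allocator heap state,
    # old/new mem = cache budget before/after this tuning pass.
    TUNE_RE = re.compile(
        r"tune_memory target: (?P<target>\d+) mapped: (?P<mapped>\d+) "
        r"unmapped: (?P<unmapped>\d+) heap: (?P<heap>\d+) "
        r"old mem: (?P<old>\d+) new mem: (?P<new>\d+)"
    )

    def summarize(line: str) -> str | None:
        """Return a compact summary of one tune_memory line, or None."""
        m = TUNE_RE.search(line)
        if m is None:
            return None
        v = {k: int(s) for k, s in m.groupdict().items()}
        return (f"target={v['target'] / 2**30:.1f}GiB "
                f"mapped={v['mapped'] / 2**20:.1f}MiB "
                f"unmapped={v['unmapped'] / 2**20:.1f}MiB "
                f"budget={v['new'] / 2**20:.0f}MiB "
                f"changed={'yes' if v['new'] != v['old'] else 'no'}")

    print(summarize(
        "prioritycache tune_memory target: 4294967296 mapped: 102490112 "
        "unmapped: 3121152 heap: 105611264 old mem: 2845415832 "
        "new mem: 2845415832"))
    # prints: target=4.0GiB mapped=97.7MiB unmapped=3.0MiB budget=2714MiB changed=no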
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.401882172s of 48.411685944s, submitted: 1
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d800 session 0x55731867f860
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102342656 unmapped: 10084352 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315bd5000 session 0x55731867e960
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315bd5800 session 0x55731867e3c0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d800 session 0x557315b18960
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d000 session 0x557318bfe780
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315bd5000 session 0x5573173c9e00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x557316144960
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b7ac00 session 0x5573170ffa40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d000 session 0x5573178970e0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315b9d800 session 0x5573175f2d20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 10051584 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363843 data_alloc: 234881024 data_used: 19156992
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557315bd5000 session 0x557317545c20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8e08000/0x0/0x4ffc00000, data 0x2bb02f5/0x2c76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
[6 byte-for-byte duplicate lines omitted: 2 repeats of the mapped:102342656 tune_memory line above, 2 repeats of the mapped:102375424 tune_memory line, and 1 repeat of the rocksdb ratio pair]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x5573175452c0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557317819400 session 0x557317121860
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 10657792 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101793792 unmapped: 10633216 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376986 data_alloc: 234881024 data_used: 19984384
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 102236160 unmapped: 10190848 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8dc8000/0x0/0x4ffc00000, data 0x2bef305/0x2cb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104341504 unmapped: 8085504 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105570304 unmapped: 6856704 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401946 data_alloc: 234881024 data_used: 23441408
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.173721313s of 17.423257828s, submitted: 35
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105619456 unmapped: 6807552 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f8dc8000/0x0/0x4ffc00000, data 0x2bef305/0x2cb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,1])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401418 data_alloc: 234881024 data_used: 23441408
[16 byte-for-byte duplicate lines omitted: 1 repeat of the mapped:101793792 tune_memory line above, 6 repeats of the mapped:105570304 line, 3 repeats of the op hist [] heartbeat, and 3 repeats of the rocksdb ratio pair]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105709568 unmapped: 6717440 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105734144 unmapped: 6692864 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105742336 unmapped: 6684672 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105750528 unmapped: 6676480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 6643712 heap: 112427008 old mem: 2845415832 new mem: 2845415832
[35 byte-for-byte duplicate lines omitted: further repeats of the last four tune_memory values (3, 3, 6, and 5 repeats respectively), 6 repeats of the op hist [] heartbeat, 4 repeats of the rocksdb ratio pair, and 4 repeats of the meta_used:1401418 _resize_shards line]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.248287201s of 23.822357178s, submitted: 90
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106766336 unmapped: 5660672 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 4759552 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 4947968 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459568 data_alloc: 234881024 data_used: 23711744
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 4825088 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459728 data_alloc: 234881024 data_used: 23715840
[9 byte-for-byte duplicate lines omitted: 4 repeats of the mapped:107601920 tune_memory line, 1 repeat of the heartbeat, and 2 repeats of the rocksdb ratio pair]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 4816896 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 4808704 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107651072 unmapped: 4775936 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107659264 unmapped: 4767744 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.855220795s of 33.113029480s, submitted: 61
[45 byte-for-byte duplicate lines omitted: repeats of earlier tune_memory values (3x mapped:107601920, 4x 107610112, 1x 107618304, 2x 107651072, 10x 107659264, 1x 107667456), 9 repeats of the heartbeat, 5 repeats of the rocksdb ratio pair, and 5 repeats of the meta_used:1459728 _resize_shards line]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 5210112 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 5201920 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 5201920 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 5201920 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f87a7000/0x0/0x4ffc00000, data 0x3210305/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457152 data_alloc: 234881024 data_used: 23719936
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 5201920 heap: 112427008 old mem: 2845415832 new mem: 2845415832
[... 213 duplicate ceph-osd messages from 02:27:12 omitted: the osd.2 heartbeat osd_stat, rocksdb commit_cache_size, and bluestore.MempoolThread _resize_shards lines above repeat verbatim throughout this interval; only the distinct prioritycache tune_memory samples are kept below ...]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107233280 unmapped: 5193728 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 5185536 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 5177344 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 5169152 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 5160960 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 5152768 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 5144576 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 5136384 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 5128192 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 5120000 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 153.173889160s of 153.188903809s, submitted: 2
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557317858800 session 0x5573173c83c0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x55731788c400 session 0x55731891a000
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x55731781d400 session 0x557317896d20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103948288 unmapped: 8478720 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316e3a000 session 0x5573177a9e00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 234881024 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fc5400 session 0x557315c64f00
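ms_handle_reset is the messenger's callback for a peer closing its connection; the two hex values are the Connection and Session object addresses. These fire in bursts alongside the map-epoch changes further below, which reads as ordinary client/monitor session churn rather than a fault, but a connection address that keeps reappearing is worth counting (in this section, con 0x557316fc5400 alone is reset six times). A quick tally, assuming the journal text on stdin:

    import re
    import sys
    from collections import Counter

    RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    # Count resets per connection address to spot a flapping peer.
    resets = Counter(m[1] for line in sys.stdin if (m := RESET.search(line)))
    for con, count in resets.most_common():
        print(f"{con}: {count} resets")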
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 8454144 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 8445952 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f93c8000/0x0/0x4ffc00000, data 0x25ef2e2/0x26b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310674 data_alloc: 218103808 data_used: 16973824
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 8437760 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fa2400 session 0x557317896b40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.810012817s of 121.131683350s, submitted: 52
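The _kv_sync_thread utilization lines quantify how hard the BlueStore commit path is working: over the 121.13 s window above the thread sat idle for 120.81 s while flushing 52 transactions, i.e. about 0.27% busy and roughly 6 ms of sync work per submitted batch. A sketch of that arithmetic over the journal:

    import re
    import sys

    KV = re.compile(r"_kv_sync_thread utilization: idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)")

    for line in sys.stdin:
        m = KV.search(line)
        if m:
            idle, span, n = float(m[1]), float(m[2]), int(m[3])
            busy = span - idle
            print(f"kv_sync: {100 * busy / span:.2f}% busy, "
                  f"{1000 * busy / max(n, 1):.1f} ms per submitted transaction")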
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fc4000 session 0x557315efa780
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x55731781f400 session 0x557318045a40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103309312 unmapped: 9117696 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 12853248 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a72d2/0x1a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [1,1])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 ms_handle_reset con 0x557316fc5400 session 0x557318bfe000
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99598336 unmapped: 12828672 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7270/0x1a6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183438 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 12820480 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.516965866s of 29.788280487s, submitted: 51
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 13180928 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a728e/0x1a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 13180928 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a728e/0x1a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 13156352 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186900 data_alloc: 218103808 data_used: 12611584
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99270656 unmapped: 13156352 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fa012000/0x0/0x4ffc00000, data 0x19a7293/0x1a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 13074432 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
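The handle_osd_map lines trace the OSD catching up with the cluster map: the first bracket is the epoch range carried in the incoming message, "i have" is the OSD's current epoch, and "src has" is the full range the sender can supply. The [127,128] message above moves the OSD from epoch 127 to 128, and the follow-up [128,128] is a no-op since 128 is already held; further below the OSD steps through epochs 129, 130 and 131 the same way. A toy version of that accept/skip decision (an illustrative simplification, not Ceph's actual implementation):

    def can_apply(have: int, first: int, last: int) -> bool:
        """Apply a map message only if it contiguously extends what we hold."""
        # Maps ending at or below `have` are stale; a message starting past
        # have + 1 leaves a gap that would have to be fetched first.
        return first <= have + 1 and last > have

    # The two messages logged above:
    print(can_apply(have=127, first=127, last=128))  # True:  advance 127 -> 128
    print(can_apply(have=128, first=128, last=128))  # False: epoch 128 already held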
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731781d400 session 0x55731867e960
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 13074432 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 13066240 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192610 data_alloc: 218103808 data_used: 12619776
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 13066240 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 13066240 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 13066240 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192610 data_alloc: 218103808 data_used: 12619776
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 13066240 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.889939308s of 14.053059578s, submitted: 23
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fa2400 session 0x557318bff4a0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc4000 session 0x557317897c20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99385344 unmapped: 13041664 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc5400 session 0x557315ae2f00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731781f400 session 0x557315f205a0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557317858800 session 0x557315f212c0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fa2400 session 0x557315f21680
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc4000 session 0x557317674f00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc5400 session 0x5573176750e0
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731781f400 session 0x557317129e00
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731788c400 session 0x557315b4ab40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fa2400 session 0x557315b4a000
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00e000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192971 data_alloc: 218103808 data_used: 12623872
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc4000 session 0x557314ee7c20
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00e000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc5400 session 0x557315e1ba40
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731781f400 session 0x557315e1b860
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731788d800 session 0x557315ec0960
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99393536 unmapped: 13033472 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194941 data_alloc: 218103808 data_used: 12623872
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:12 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194941 data_alloc: 218103808 data_used: 12623872
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:12 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:12 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194941 data_alloc: 218103808 data_used: 12623872
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00d000/0x0/0x4ffc00000, data 0x19a8e43/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fa2400 session 0x557317957680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.581138611s of 22.655023575s, submitted: 11
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc4000 session 0x557315ae21e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x557316fc5400 session 0x557317956f00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 ms_handle_reset con 0x55731781f400 session 0x5573180454a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00e000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193316 data_alloc: 218103808 data_used: 12623872
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99434496 unmapped: 12992512 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fa00e000/0x0/0x4ffc00000, data 0x19a8e33/0x1a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,2])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12902400 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 129 ms_handle_reset con 0x55731788d400 session 0x557315ba63c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12902400 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa00c000/0x0/0x4ffc00000, data 0x19aa9be/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194608 data_alloc: 218103808 data_used: 12627968
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12902400 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa00c000/0x0/0x4ffc00000, data 0x19aa9be/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194608 data_alloc: 218103808 data_used: 12627968
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12902400 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fa00c000/0x0/0x4ffc00000, data 0x19aa9be/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 12902400 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.115213394s of 15.466153145s, submitted: 66
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 11960320 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200386 data_alloc: 218103808 data_used: 12636160
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 11960320 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 11960320 heap: 112427008 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fa009000/0x0/0x4ffc00000, data 0x19ac444/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 29081600 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 29081600 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 131 ms_handle_reset con 0x557316fa2400 session 0x557315acf4a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312697 data_alloc: 218103808 data_used: 12644352
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x29399e4/0x2a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9078000/0x0/0x4ffc00000, data 0x29399e4/0x2a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312697 data_alloc: 218103808 data_used: 12644352
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.963764191s of 13.395780563s, submitted: 30
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 28033024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 132 ms_handle_reset con 0x557316fc4000 session 0x557315404f00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 28016640 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fa002000/0x0/0x4ffc00000, data 0x19afb6f/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101195776 unmapped: 28016640 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209729 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101203968 unmapped: 28008448 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 7526 writes, 29K keys, 7526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7526 writes, 1699 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 848 writes, 2356 keys, 848 commit groups, 1.0 writes per commit group, ingest: 1.52 MB, 0.00 MB/s
Interval WAL: 848 writes, 383 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.495849609s of 173.693954468s, submitted: 43
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 27967488 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d000 session 0x557318bfe960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d800 session 0x557318bfef00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315bd5000 session 0x557318bff860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4f9bf1000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101261312 unmapped: 27951104 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211443 data_alloc: 218103808 data_used: 12652544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 30351360 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d000 session 0x557315f221e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa690000/0x0/0x4ffc00000, data 0xf14560/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093316 data_alloc: 218103808 data_used: 8089600
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa690000/0x0/0x4ffc00000, data 0xf14560/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093316 data_alloc: 218103808 data_used: 8089600
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa690000/0x0/0x4ffc00000, data 0xf14560/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093316 data_alloc: 218103808 data_used: 8089600
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa690000/0x0/0x4ffc00000, data 0xf14560/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.161890030s of 19.983861923s, submitted: 122
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557317859800 session 0x557317897860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557317859c00 session 0x557315fc3c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557316fc5000 session 0x557315efd860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad30000/0x0/0x4ffc00000, data 0x874560/0x93e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad30000/0x0/0x4ffc00000, data 0x874560/0x93e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d800 session 0x5573177a9a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 74.880226135s of 75.004554749s, submitted: 22
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 33226752 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa8e3000/0x0/0x4ffc00000, data 0xcc0584/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 134 ms_handle_reset con 0x557315b9d000 session 0x557317128d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 33218560 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045751 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa8de000/0x0/0x4ffc00000, data 0xcc2124/0xd8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96051200 unmapped: 33161216 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x55731891be00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96059392 unmapped: 33153024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96059392 unmapped: 33153024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x557314ee7e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x5573177dc5a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315bd5000 session 0x557315f23a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x5573179565a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dda40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 29966336 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 29966336 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106702 data_alloc: 218103808 data_used: 6815744
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x557316052d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 62.977394104s of 63.218254089s, submitted: 29
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104603648 unmapped: 24608768 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557315f21860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc4000 session 0x557317897680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573176741e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315e0d860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dc1e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557317544b40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x5573177dde00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315b19c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573175445a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x55731470c5a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 33660928 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266658 data_alloc: 218103808 data_used: 6815744
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557318bffe00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 33300480 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 33275904 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731781f400 session 0x557317675860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 33275904 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99647488 unmapped: 33767424 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99123200 unmapped: 34291712 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297436 data_alloc: 218103808 data_used: 8544256
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 28131328 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 21864448 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 21864448 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 21831680 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 21798912 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413596 data_alloc: 234881024 data_used: 24965120
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557317675e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557318045a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.679318428s of 14.867232323s, submitted: 25
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573169b9680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.922222137s of 25.935543060s, submitted: 5
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 24354816 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 24281088 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8799000/0x0/0x4ffc00000, data 0x2e05cb1/0x2ed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379986 data_alloc: 234881024 data_used: 13197312
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2e0fcb1/0x2edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2e0fcb1/0x2edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379986 data_alloc: 234881024 data_used: 13197312
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557315efc780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731788dc00 session 0x557315efc3c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315b0fe00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557315b0f4a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 24117248 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557315ed1c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557315ae34a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731788c000 session 0x557315ae3680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x5573177dda40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573177dc5a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428552 data_alloc: 234881024 data_used: 13201408
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f80e2000/0x0/0x4ffc00000, data 0x34bbcc1/0x358c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dd2c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573177dd0e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f59c00 session 0x55731867e000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315f203c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573169b8f00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.991237640s of 16.390045166s, submitted: 115
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x55731867ef00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557316053e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 26583040 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f58c00 session 0x5573160525a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557317956b40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557318044d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 26550272 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 26542080 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464210 data_alloc: 234881024 data_used: 13205504
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 26533888 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 26517504 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 26845184 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557318bffa40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573175f2960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 26951680 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f58800 session 0x557317675680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410974 data_alloc: 234881024 data_used: 13201408
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557318462800 session 0x557317120960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411266 data_alloc: 234881024 data_used: 13205504
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.252437592s of 16.516160965s, submitted: 43
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411618 data_alloc: 234881024 data_used: 13205504
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 27967488 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x5573161445a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557317110b40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.091217041s of 20.105909348s, submitted: 2
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 30302208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573180441e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.659467697s of 17.694372177s, submitted: 12
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 28491776 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 28508160 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 28475392 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229706 data_alloc: 218103808 data_used: 8359936
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285957 data_alloc: 218103808 data_used: 8372224
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 36962304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5400 session 0x557315ed0960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 36962304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x5573171114a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573169b8000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x5573177dd4a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318462800 session 0x557317896960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.090188980s of 10.480854988s, submitted: 80
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 36945920 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573173c9680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ec03c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315ed1c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557318044d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x557318045a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec3000/0x0/0x4ffc00000, data 0x26d5948/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307456 data_alloc: 218103808 data_used: 8376320
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318462800 session 0x557315ba65a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573177a8780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec3000/0x0/0x4ffc00000, data 0x26d5948/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557318bff860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557318bff2c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309294 data_alloc: 218103808 data_used: 8376320
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 36904960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557315b4a000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fef400 session 0x557315b0e960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573180443c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315e1b860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 36904960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573173c8780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec2000/0x0/0x4ffc00000, data 0x26d5958/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318bfef00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 29294592 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557315aced20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557318bfed20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ba6960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.451944351s of 10.590756416s, submitted: 24
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557315f21680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x5573177dd0e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318feec00 session 0x557315e0c780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315f20780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315efd860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x5573176752c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573171290e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557315b4ba40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 29253632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x55731867e960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557316041860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557317129e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 29253632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557317545e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x5573171281e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788c000 session 0x557315ae2f00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315b0fa40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420502 data_alloc: 234881024 data_used: 17334272
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 29302784 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557316040d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318044b40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x55731470d2c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ed1a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 30212096 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557315ba6b40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788c000 session 0x5573177f05a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318bffc20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 30203904 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396081 data_alloc: 234881024 data_used: 15851520
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30179328 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448081 data_alloc: 234881024 data_used: 23171072
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448081 data_alloc: 234881024 data_used: 23171072
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448241 data_alloc: 234881024 data_used: 23175168
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448241 data_alloc: 234881024 data_used: 23175168
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.193988800s of 28.378929138s, submitted: 34
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450177 data_alloc: 234881024 data_used: 23162880
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788b800 session 0x557315b0fc20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573173c9e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450177 data_alloc: 234881024 data_used: 23162880
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315fc34a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 27418624 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x55731470d860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.920627594s of 10.017802238s, submitted: 28
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315acf860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319694c00 session 0x5573177a90e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557317544d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557317956780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557315e0cb40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25952256 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19693568 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 19546112 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f74c8000/0x0/0x4ffc00000, data 0x40ce916/0x41a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 20504576 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1618073 data_alloc: 234881024 data_used: 24219648
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19693568 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 19677184 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319695c00 session 0x557315404960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 20291584 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315b4ba40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 20291584 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315b4a000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315f22d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 19939328 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f73f1000/0x0/0x4ffc00000, data 0x41a4926/0x427d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1628325 data_alloc: 234881024 data_used: 24231936
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 19922944 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788a000 session 0x5573173c9a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788bc00 session 0x557315b190e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 19922944 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.479908943s of 11.012975693s, submitted: 119
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 25452544 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319695c00 session 0x557317110d20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 25100288 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494134 data_alloc: 234881024 data_used: 22036480
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 24133632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x557315b19e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573175f21e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 24125440 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557314703e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.028324127s of 32.471443176s, submitted: 47
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 22847488 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445974 data_alloc: 234881024 data_used: 18694144
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 24190976 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24862720 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456186 data_alloc: 234881024 data_used: 18501632
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8946000/0x0/0x4ffc00000, data 0x2c4a894/0x2d1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450154 data_alloc: 234881024 data_used: 18501632
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8933000/0x0/0x4ffc00000, data 0x2c66894/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450154 data_alloc: 234881024 data_used: 18501632
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8933000/0x0/0x4ffc00000, data 0x2c66894/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x55731891ab40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788a000 session 0x557317128960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x5573177dc1e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557318045680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.408416748s of 20.744815826s, submitted: 74
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x5573179565a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573161454a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788bc00 session 0x557316053860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557316053e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x55731867f0e0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496135 data_alloc: 234881024 data_used: 18501632
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x55731867eb40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x55731867f4a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496135 data_alloc: 234881024 data_used: 18501632
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317858c00 session 0x55731867e000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x55731867f2c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500742 data_alloc: 234881024 data_used: 18542592
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 22814720 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.138267517s of 12.369709969s, submitted: 33
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 20340736 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 20340736 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23457792
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557315ba6960
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x55731867fa40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x557315404f00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 20193280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 20193280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543966 data_alloc: 234881024 data_used: 23461888
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 20185088 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 20185088 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 19963904 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 19841024 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 19554304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
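[editor's note] _resize_shards shows how the ~2.7 GiB budget from the tuner is carved into per-pool allocations (kv, kv_onode, meta, data) against what each pool actually uses; summing the *_alloc fields recovers essentially the whole budget, while the *_used fields show the cache is nearly empty on this lightly loaded OSD. The arithmetic, with values copied from the line above:

    # Per-pool cache carve-up from one bluestore _resize_shards line.
    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 234881024}
    used  = {"kv": 2144, "kv_onode": 464,
             "meta": 1543806, "data": 23588864}

    for pool, a in alloc.items():
        print(f"{pool:9s} alloc {a / 2**20:7.0f} MiB ({a / cache_size:5.1%}), "
              f"used {used[pool] / 2**20:8.3f} MiB")

    print(f"sum of allocs: {sum(alloc.values()) / 2**20:.0f} MiB "
          f"of {cache_size / 2**20:.0f} MiB budget")   # 2688 of 2714 MiB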
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 19554304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 19456000 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122363904 unmapped: 19447808 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122363904 unmapped: 19447808 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.034168243s of 30.101922989s, submitted: 22
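[editor's note] The _kv_sync_thread utilization line is the clearest health signal in this burst: across the ~30 s window the BlueStore KV sync thread was idle 30.034 s of 30.102 s, i.e. busy only about 0.2% while flushing 22 transaction batches. Worked out from the figures above:

    idle, window, submitted = 30.034168243, 30.101922989, 22

    busy = window - idle
    print(f"busy: {busy:.3f} s of {window:.1f} s ({busy / window:.2%})")
    # busy: 0.068 s of 30.1 s (0.23%)
    print(f"mean time per submitted batch: {busy / submitted * 1e3:.1f} ms")
    # mean time per submitted batch: 3.1 ms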
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 15900672 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 15777792 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7ba3000/0x0/0x4ffc00000, data 0x39eb929/0x3ac3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 15310848 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1624362 data_alloc: 234881024 data_used: 24428544
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 15278080 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1624538 data_alloc: 234881024 data_used: 24432640
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.249171257s of 13.556386948s, submitted: 68
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1632570 data_alloc: 234881024 data_used: 24686592
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1633706 data_alloc: 234881024 data_used: 24715264
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731928c000 session 0x557315ae3680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315acf4a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557317957680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573169b94a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315ed1860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127516672 unmapped: 18489344 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731928c000 session 0x5573160412c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315b0f2c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557317129a40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573169b9c20
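[editor's note] The ms_handle_reset bursts record peer sessions being torn down and re-established; note that the same con pointer (e.g. 0x557315b7a800) reappears with different session pointers, so counting resets per connection is more useful than reading them line by line. A small aggregation sketch over lines in this format:

    import re
    from collections import Counter

    log_lines = [
        "osd.2 136 ms_handle_reset con 0x55731928c000 session 0x557315ae3680",
        "osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315acf4a0",
        "osd.2 136 ms_handle_reset con 0x55731928c000 session 0x5573160412c0",
    ]

    resets = Counter()
    for line in log_lines:
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            resets[m.group(1)] += 1

    for con, n in resets.most_common():
        print(f"{con}: {n} reset(s)")   # 0x55731928c000 shows 2 resets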
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x44da98b/0x45b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1715758 data_alloc: 234881024 data_used: 24711168
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x44da98b/0x45b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 18366464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957085609s of 12.243449211s, submitted: 62
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315b0ed20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1720489 data_alloc: 234881024 data_used: 24670208
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 15089664 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 11845632 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136699904 unmapped: 9306112 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136699904 unmapped: 9306112 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1799529 data_alloc: 251658240 data_used: 35803136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136749056 unmapped: 9256960 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136749056 unmapped: 9256960 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136790016 unmapped: 9216000 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 9199616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.489391327s of 11.564118385s, submitted: 12
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315b18780
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x557315b194a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 9166848 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315f21860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638713 data_alloc: 251658240 data_used: 29650944
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638713 data_alloc: 251658240 data_used: 29650944
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859000 session 0x5573171294a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859400 session 0x557318045c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x5573177dcb40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128188416 unmapped: 17817600 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445106 data_alloc: 234881024 data_used: 20480000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127541248 unmapped: 18464768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445106 data_alloc: 234881024 data_used: 20480000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.778203964s of 20.959466934s, submitted: 43
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
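[editor's note] handle_osd_map compresses the map catch-up protocol into one line: the message carries epochs [137,137], this OSD has 136, and the sender has [1,137], so epoch 137 can be applied directly (the following lines indeed log as osd.2 137, then 138 and 139). A toy version of that decision; the function name and logic here are illustrative only, not Ceph internals:

    def epochs_to_apply(have, msg_first, msg_last):
        """Contiguous epochs applicable from one incoming map message."""
        if msg_first > have + 1:
            return []   # gap: older maps must be fetched from the sender first
        return list(range(have + 1, msg_last + 1))

    print(epochs_to_apply(have=136, msg_first=137, msg_last=137))  # [137]
    print(epochs_to_apply(have=136, msg_first=139, msg_last=139))  # [] -> fetch 137..138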
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 137 ms_handle_reset con 0x557315406000 session 0x5573180454a0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452961 data_alloc: 234881024 data_used: 20488192
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 18407424 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 137 ms_handle_reset con 0x557315b7a400 session 0x557318bffa40
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 18407424 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x289a876/0x2971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127614976 unmapped: 18391040 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 138 ms_handle_reset con 0x557315b9d000 session 0x5573176743c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8cf9000/0x0/0x4ffc00000, data 0x289c447/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 18300928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 18300928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455487 data_alloc: 234881024 data_used: 20516864
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128131072 unmapped: 17874944 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128860160 unmapped: 17145856 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128860160 unmapped: 17145856 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8613000/0x0/0x4ffc00000, data 0x2f85024/0x305b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 17637376 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 17637376 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2ffe024/0x30d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584693909s of 10.118380547s, submitted: 114
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528496 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.2 total, 600.0 interval
    Cumulative writes: 9454 writes, 36K keys, 9454 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 9454 writes, 2477 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1928 writes, 7544 keys, 1928 commit groups, 1.0 writes per commit group, ingest: 8.85 MB, 0.01 MB/s
    Interval WAL: 1928 writes, 778 syncs, 2.48 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
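[editor's note] The derived rates in the DB Stats block are internally consistent and can be re-derived from the raw counters, a quick sanity check when throughput numbers look suspicious:

    # Re-derive the per-sync and throughput figures from the DB Stats counters.
    cum_writes, cum_syncs = 9454, 2477
    int_writes, int_syncs = 1928, 778
    interval_s = 600.0

    print(f"cumulative writes/sync: {cum_writes / cum_syncs:.2f}")  # 3.82, as logged
    print(f"interval writes/sync  : {int_writes / int_syncs:.2f}")  # 2.48, as logged
    print(f"interval ingest rate  : {8.85 / interval_s:.2f} MB/s")  # 0.01 MB/s, as logged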
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 17555456 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: mgrc ms_handle_reset ms_handle_reset con 0x557316fc4800
Nov 26 02:27:13 compute-0 ceph-osd[208794]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2845592742
Nov 26 02:27:13 compute-0 ceph-osd[208794]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2845592742,v1:192.168.122.100:6801/2845592742]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: mgrc handle_mgr_configure stats_period=5
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 17276928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.026569366s of 16.066671371s, submitted: 15
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528700 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528700 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.779787064s of 12.801182747s, submitted: 3
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x557316e3a000 session 0x5573177dcd20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 17424384 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 17424384 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 17416192 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 66.053329468s of 66.063583374s, submitted: 1
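Annotation: this _kv_sync_thread utilization line is the clearest activity signal in the stretch: across a ~66 s window the BlueStore KV-sync thread was idle for all but about 10 ms and submitted a single transaction, so osd.2 is essentially quiescent here. Worked out:

    idle, window = 66.053329468, 66.063583374
    busy = window - idle
    print(f"busy {busy * 1000:.1f} ms over {window:.1f} s ({busy / window:.4%})")
    # busy 10.3 ms over 66.1 s (0.0155%)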
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129662976 unmapped: 16343040 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 16334848 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129703936 unmapped: 16302080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 71.231262207s of 71.723098755s, submitted: 108
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
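Annotation: from this heartbeat on, the statfs values change for the first time in the section, shortly after the kv-sync report above noted 108 submitted transactions: available space drops by 8 KiB while stored and allocated data each grow by 4 KiB, consistent with one small 4 KiB write landing on osd.2 (where the remaining 4 KiB of the available-space drop went, presumably metadata, is not verified here). The deltas, taken straight from the hex fields:

    before = {"avail": 0x4f8584000, "stored": 0x3012a87, "alloc": 0x30ea000}
    after  = {"avail": 0x4f8582000, "stored": 0x3013a87, "alloc": 0x30eb000}
    for k in before:
        print(k, after[k] - before[k])
    # avail -8192, stored 4096, alloc 4096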
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 16187392 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 16187392 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 16187392 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
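The heartbeat lines embed a store_statfs summary in hex; "peers [0,1]" names the two other OSDs this daemon heartbeats with, and the empty "op hist []" reflects the lack of recent client ops. Decoding the hex under the assumption that the three slash-separated values are available/internally-reserved/total bytes and the data pair is stored/allocated (a best-effort mapping onto store_statfs_t's print order, not something stated in the log):

    # Hex values copied from the heartbeat line above; the field names are an
    # assumed mapping onto store_statfs_t, not taken from this log itself.
    fields = {"available": 0x4f8582000, "internally_reserved": 0x0,
              "total": 0x4ffc00000, "data_stored": 0x3013a87,
              "data_allocated": 0x30eb000, "omap": 0x63a, "meta": 0x458f9c6}
    for name, v in fields.items():
        print(f"{name:20s} {v:>14,d} bytes ({v/2**30:6.3f} GiB)")
    # -> a ~20 GiB OSD with ~19.9 GiB still available and ~48 MiB of data.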
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 16171008 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 16162816 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129851392 unmapped: 16154624 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 131.769302368s of 131.775421143s, submitted: 1
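The _kv_sync_thread utilization line quantifies how busy BlueStore's key-value commit thread was over the preceding window: idle for 131.769302368 s of 131.775421143 s, with a single transaction submitted. The implied duty cycle, computed directly from those two figures:

    idle, elapsed, submitted = 131.769302368, 131.775421143, 1
    busy = elapsed - idle
    print(f"busy {busy*1e3:.1f} ms of {elapsed:.1f} s "
          f"= {busy/elapsed:.4%} utilization ({submitted} txn)")

That works out to roughly 6.1 ms of work in ~132 s, i.e. about 0.005% utilization, consistent with an essentially idle OSD.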
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8583000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532160 data_alloc: 234881024 data_used: 21659648
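Each _resize_shards line shows the autotuned budget (cache_size) split into per-pool allocations. A quick cross-check on the numbers printed above: the four *_alloc values sum to about 99% of cache_size (the small remainder is presumably alignment or slack, though the log does not say), while the *_used values are tiny on this idle OSD:

    # Allocation figures copied from the _resize_shards line above.
    cache_size = 2_845_415_832
    alloc = {"kv": 1_207_959_552, "kv_onode": 234_881_024,
             "meta": 1_140_850_688, "data": 234_881_024}
    total = sum(alloc.values())
    print(f"sum(alloc) = {total} = {total/cache_size:.1%} of cache_size")
    for name, nbytes in alloc.items():
        print(f"  {name:8s} {nbytes/2**20:7.1f} MiB ({nbytes/cache_size:5.1%})")

The split is roughly 42% kv, 40% metadata, and 8% each for onodes and data buffers, and it never changes in this window because the budget itself never changes.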
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 15966208 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f856c000/0x0/0x4ffc00000, data 0x302aa87/0x3102000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532780 data_alloc: 234881024 data_used: 21659648
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 15958016 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533260 data_alloc: 234881024 data_used: 21671936
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.488658905s of 21.519058228s, submitted: 5
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 16072704 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533680 data_alloc: 234881024 data_used: 21671936
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.315682411s of 14.335088730s, submitted: 2
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129916928 unmapped: 16089088 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129925120 unmapped: 16080896 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 16072704 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129941504 unmapped: 16064512 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 16056320 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129908736 unmapped: 16097280 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129916928 unmapped: 16089088 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129925120 unmapped: 16080896 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 16072704 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129957888 unmapped: 16048128 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788b000 session 0x55731891ad20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788a400 session 0x557317129860
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 212.596664429s of 212.610076904s, submitted: 2
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f861f000/0x0/0x4ffc00000, data 0x2f77a87/0x304f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788b000 session 0x557318bff680
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520609 data_alloc: 234881024 data_used: 21667840
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f77a77/0x304e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520609 data_alloc: 234881024 data_used: 21667840
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f77a77/0x304e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.107772827s of 10.144620895s, submitted: 7
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731928c000 session 0x557317111e00
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731928c400 session 0x55731867fc20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 25952256 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788a400 session 0x557315e0c000
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 9326592
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9557000/0x0/0x4ffc00000, data 0x1ccaa15/0x1da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9557000/0x0/0x4ffc00000, data 0x1ccaa15/0x1da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 9326592
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.019871712s of 10.321649551s, submitted: 50
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 140 ms_handle_reset con 0x557315b7a800 session 0x557315b0ed20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 32325632 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fa0cb000/0x0/0x4ffc00000, data 0x14cc5c3/0x15a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328411 data_alloc: 218103808 data_used: 2514944
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 49152000 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 141 ms_handle_reset con 0x55731788a400 session 0x5573154043c0
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 49152000 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 142 ms_handle_reset con 0x55731788b000 session 0x557317957c20
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa0c6000/0x0/0x4ffc00000, data 0x14cfd1a/0x15a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa0c6000/0x0/0x4ffc00000, data 0x14cfd1a/0x15a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229276 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa0c3000/0x0/0x4ffc00000, data 0x14d1799/0x15aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.369564056s of 11.736245155s, submitted: 70
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.2 total, 600.0 interval
    Cumulative writes: 9943 writes, 38K keys, 9943 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 9943 writes, 2702 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 489 writes, 1339 keys, 489 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
    Interval WAL: 489 writes, 225 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}'
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}'
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 49758208 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 49963008 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:13 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:13 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:27:13 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:27:13 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 49676288 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:13 compute-0 ceph-osd[208794]: do_command 'log dump' '{prefix=log dump}'
Nov 26 02:27:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 02:27:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183136199' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 02:27:13 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:27:13 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15591 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:13 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 02:27:13 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647079851' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 02:27:13 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15595 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:14 compute-0 nova_compute[350387]: 2025-11-26 02:27:14.070 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:14 compute-0 nova_compute[350387]: 2025-11-26 02:27:14.098 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:27:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 02:27:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615063090' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 02:27:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:14 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15599 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:14 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 02:27:14 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186494146' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 02:27:14 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15603 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:27:15 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15607 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:27:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 26 02:27:15 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247139119' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 02:27:15 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15611 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:27:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:27:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 26 02:27:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107520523' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 02:27:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:27:16 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15617 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:27:16 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:27:16 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:27:16.648+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:27:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 26 02:27:16 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115504096' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 26 02:27:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372511202' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 26 02:27:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345266668' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 26 02:27:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843928085' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 26 02:27:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643118789' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 26 02:27:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23322268' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251727 data_alloc: 234881024 data_used: 18694144
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fab3a000/0x0/0x4ffc00000, data 0x20214b7/0x20e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fab3a000/0x0/0x4ffc00000, data 0x20214b7/0x20e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251727 data_alloc: 234881024 data_used: 18694144
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56e4f2400 session 0x55a56eb8c780
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 8536064 heap: 109535232 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb68000 session 0x55a56e6070e0
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56e4f4f00
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56c405e00
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 163.522369385s of 163.530075073s, submitted: 1
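The _kv_sync_thread line says the BlueStore key-value sync thread was idle for 163.522 s of a 163.530 s window while committing a single transaction — the OSD is essentially quiescent. Expressed as a utilization figure:

    # Utilization implied by the _kv_sync_thread line above.
    idle, window = 163.522369385, 163.530075073
    print(f"busy {(1 - idle / window) * 100:.4f}%")  # -> busy 0.0047%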
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c13cb40
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56c2b2000
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56e4f2400 session 0x55a56eb441e0
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb68000 session 0x55a56e2974a0
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb68000 session 0x55a56c2b30e0
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920e000/0x0/0x4ffc00000, data 0x27ad4b7/0x2870000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311178 data_alloc: 234881024 data_used: 18694144
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 12591104 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e9ebc20
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920e000/0x0/0x4ffc00000, data 0x27ad4b7/0x2870000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 12713984 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312611 data_alloc: 234881024 data_used: 18694144
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 12713984 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 12156928 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 10125312 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 5718016 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 5685248 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:17 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:17 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1368131 data_alloc: 234881024 data_used: 26345472
Nov 26 02:27:17 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 5677056 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 5677056 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f920d000/0x0/0x4ffc00000, data 0x27ad4da/0x2871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 5677056 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.060741425s of 42.239898682s, submitted: 28
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 3629056 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 3629056 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407011 data_alloc: 234881024 data_used: 26415104
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 4382720 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 4382720 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 4382720 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 4382720 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 7846 writes, 31K keys, 7846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7846 writes, 1645 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 717 writes, 2400 keys, 717 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
    Interval WAL: 717 writes, 284 syncs, 2.52 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
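The counters in the dump are internally consistent: the "writes per sync" figures are simply the write/sync ratios, and the 600 s interval amounts to roughly 1.2 writes per second:

    # Sanity-check the ratios RocksDB reports above.
    writes, syncs = 7846, 1645
    print(f"{writes / syncs:.2f} writes per sync")        # -> 4.77, as reported
    print(f"{717 / 600:.2f} interval writes per second")  # -> 1.20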
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1407171 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 4759552 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.838478088s of 33.066207886s, submitted: 53
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 4710400 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 4702208 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405731 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 4694016 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8d7c000/0x0/0x4ffc00000, data 0x2c3e4da/0x2d02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 4694016 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 4694016 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56c826d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f800 session 0x55a56c404000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f000 session 0x55a56e2130e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56c7b3680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 4694016 heap: 116350976 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.358352661s of 48.385982513s, submitted: 3
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56c502f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f000 session 0x55a56e2552c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f800 session 0x55a56c7123c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8896000/0x0/0x4ffc00000, data 0x31234ea/0x31e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb68000 session 0x55a56df761e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56c8c32c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56eb7de00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f000 session 0x55a56eb7c3c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 10739712 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f800 session 0x55a56eb7d680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f400 session 0x55a56eb3da40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb3cd20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56e633a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f000 session 0x55a56e2974a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1454026 data_alloc: 234881024 data_used: 26419200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111951872 unmapped: 10698752 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8894000/0x0/0x4ffc00000, data 0x312355c/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111984640 unmapped: 10665984 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8894000/0x0/0x4ffc00000, data 0x312355c/0x31ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 10633216 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 10633216 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56eb5f800 session 0x55a56eb44780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 10625024 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1455247 data_alloc: 234881024 data_used: 26431488
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 10625024 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112025600 unmapped: 10625024 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8893000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 9576448 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 6291456 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 5791744 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490287 data_alloc: 251658240 data_used: 31371264
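Note the change relative to the earlier _resize_shards passes: data_alloc has grown from 234881024 (224 MiB) to 251658240 (240 MiB) while data_used climbed past 31 MB, which suggests the priority-cache balancer is shifting one 16 MiB step of budget toward the BlueStore data cache under the ongoing writes:

    # The data cache shard budget grew by one step between passes.
    old, new = 234881024, 251658240
    print((new - old) // 2**20, "MiB step ->", new // 2**20, "MiB total")  # 16 -> 240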
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 5808128 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 5808128 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 5808128 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8893000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 5808128 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 5808128 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8893000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490287 data_alloc: 251658240 data_used: 31371264
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 5775360 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.302444458s of 17.472299576s, submitted: 21
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 5775360 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 5742592 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8893000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490239 data_alloc: 251658240 data_used: 31375360
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490239 data_alloc: 251658240 data_used: 31375360
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490239 data_alloc: 251658240 data_used: 31375360
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490239 data_alloc: 251658240 data_used: 31375360
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8483000/0x0/0x4ffc00000, data 0x312357f/0x31eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 5652480 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 5636096 heap: 122650624 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.345281601s of 23.891231537s, submitted: 90
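The two `_kv_sync_thread utilization` lines in this stretch (idle 17.30s of 17.47s with 21 submits, then idle 23.35s of 23.89s with 90) convert directly into a busy fraction. A throwaway parser, with the regex fitted only to the text format quoted above:

    import re

    samples = [
        "idle 17.302444458s of 17.472299576s, submitted: 21",
        "idle 23.345281601s of 23.891231537s, submitted: 90",
    ]
    pat = re.compile(r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)")
    for s in samples:
        idle, total, submitted = pat.search(s).groups()
        busy = float(total) - float(idle)
        print(f"busy {busy:.3f}s / {total}s ({busy/float(total):.1%}), "
              f"{submitted} txns")

Both samples come out under 3% busy, i.e. the KV sync thread on osd.1 is close to idle over this window.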
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1570457 data_alloc: 251658240 data_used: 31387648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 4669440 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7a98000/0x0/0x4ffc00000, data 0x3b0857f/0x3bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 4128768 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f7a61000/0x0/0x4ffc00000, data 0x3b3f57f/0x3c07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
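In the heartbeat lines, the first `store_statfs` hex value shrinks (0x4f8893000 earlier, 0x4f7a61000 above) while the `data` pair grows, consistent with a small amount of new data being written. A sketch converting the sample above to GiB; treating the triple as available / internally reserved / total and the data pair as stored / allocated is an assumption from the printed order, not confirmed against Ceph:

    GiB, MiB = 1024 ** 3, 1024 ** 2
    avail, reserved, total = 0x4f7a61000, 0x0, 0x4ffc00000  # assumed order
    stored, alloc = 0x3b3f57f, 0x3c07000                    # "data a/b"
    print(f"total {total/GiB:.2f} GiB, free(?) {avail/GiB:.2f} GiB, "
          f"used {(total - avail)/GiB:.3f} GiB")
    print(f"data stored {stored/MiB:.1f} MiB, allocated {alloc/MiB:.1f} MiB")

which reads as roughly 0.13 GiB used of a 20.00 GiB device, with ~59 MiB of object data stored.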
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3104768 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3104768 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3104768 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1585511 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 2990080 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 2990080 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121757696 unmapped: 2990080 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79f6000/0x0/0x4ffc00000, data 0x3baa57f/0x3c72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 4759552 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 4759552 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 4677632 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 4677632 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 4677632 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 4677632 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 4734976 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 4726784 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 4726784 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 4694016 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 5414912 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 5414912 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 5406720 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 5398528 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119357440 unmapped: 5390336 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 5382144 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 5382144 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 5382144 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 5382144 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119373824 unmapped: 5373952 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119382016 unmapped: 5365760 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119390208 unmapped: 5357568 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119398400 unmapped: 5349376 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119406592 unmapped: 5341184 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119414784 unmapped: 5332992 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575539 data_alloc: 251658240 data_used: 31395840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 5324800 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 185.741470337s of 186.199707031s, submitted: 98
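
This _kv_sync_thread line quantifies how quiet the OSD is: the key/value sync thread sat idle for 185.74 s of a 186.20 s window, flushing only 98 transactions.

    print(f"{185.741470337 / 186.199707031:.2%} idle")   # 99.75% idle
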
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56d727400 session 0x55a56af62b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e5041e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56e4f3000 session 0x55a56eb8ba40
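
ms_handle_reset fires when a peer drops a messenger connection and the OSD discards the associated session. Scattered resets like the three above are normal when clients (mon/mgr, stats pollers) close short-lived connections; they are only worth chasing if they cluster around one peer. A quick tally per connection pointer (an ad-hoc script, not a Ceph tool):

    import collections, re, sys

    resets = collections.Counter()
    for line in sys.stdin:
        m = re.search(r'ms_handle_reset con (0x[0-9a-f]+)', line)
        if m:
            resets[m.group(1)] += 1   # one count per reset of this connection

    for con, n in resets.most_common():
        print(n, con)

Fed this excerpt on stdin, every connection resets exactly once, consistent with clean client disconnects rather than a flapping peer.
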
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f79e6000/0x0/0x4ffc00000, data 0x3bc057f/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1575435 data_alloc: 251658240 data_used: 31391744
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 10919936 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c9850d/0x2d5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56df7b860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c1b3400 session 0x55a56eb50000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56d726800 session 0x55a56eb3c1e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c1b3c00 session 0x55a56eb4fc20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
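
Note the second rocksdb ratio changing from 0.056338 (~4/71) to 0.0555556 on this pass: it is re-derived alongside the shard resize reported directly above, where meta_alloc grew and data_alloc shrank. The new value is exactly 1/18:

    print(round(1 / 18, 7))   # 0.0555556
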
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114352128 unmapped: 10395648 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56d727000 session 0x55a56eb501e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8910000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399707 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 10387456 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.861587524s of 121.107582092s, submitted: 42
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0800 session 0x55a56df2e960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56e4f2400 session 0x55a56eb4f4a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399531 data_alloc: 234881024 data_used: 23044096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8911000/0x0/0x4ffc00000, data 0x2c984ab/0x2d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e296f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253293 data_alloc: 218103808 data_used: 15319040
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253293 data_alloc: 218103808 data_used: 15319040
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253293 data_alloc: 218103808 data_used: 15319040
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109789184 unmapped: 14958592 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253293 data_alloc: 218103808 data_used: 15319040
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253293 data_alloc: 218103808 data_used: 15319040
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x207b488/0x213f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.571105957s of 29.834913254s, submitted: 46
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255267 data_alloc: 218103808 data_used: 15323136
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f952d000/0x0/0x4ffc00000, data 0x207b498/0x2140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 14950400 heap: 124747776 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118185984 unmapped: 14950400 heap: 133136384 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 31694848 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb8c780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 31678464 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f859e000/0x0/0x4ffc00000, data 0x3008a15/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366217 data_alloc: 218103808 data_used: 15331328
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f859e000/0x0/0x4ffc00000, data 0x3008a15/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 31678464 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366217 data_alloc: 218103808 data_used: 15331328
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 31678464 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f859e000/0x0/0x4ffc00000, data 0x3008a15/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 31678464 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f2800 session 0x55a56c13d0e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f859e000/0x0/0x4ffc00000, data 0x3008a15/0x30cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17129472 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e3d10e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 17129472 heap: 141533184 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.271093369s of 15.387918472s, submitted: 6
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56d926800 session 0x55a56eb4f2c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e659a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b31c960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e4f000/0x0/0x4ffc00000, data 0x3757a3e/0x381f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1468071 data_alloc: 234881024 data_used: 28901376
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f2800 session 0x55a56b8141e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f3000 session 0x55a56eb50780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56d713c00 session 0x55a56eb50000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb51c20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 17555456 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb51860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e4f54a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e504960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c1b3000 session 0x55a56df7b860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e296f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125009920 unmapped: 17580032 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e297680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 17563648 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e2974a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x375bae9/0x3825000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125026304 unmapped: 17563648 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e424b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c1b2400 session 0x55a56b368d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 17555456 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1474901 data_alloc: 234881024 data_used: 28913664
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125034496 unmapped: 17555456 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 17514496 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 17620992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e48000/0x0/0x4ffc00000, data 0x375bb0c/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529941 data_alloc: 234881024 data_used: 36564992
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e48000/0x0/0x4ffc00000, data 0x375bb0c/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529941 data_alloc: 234881024 data_used: 36564992
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e48000/0x0/0x4ffc00000, data 0x375bb0c/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 14745600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 14737408 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1529941 data_alloc: 234881024 data_used: 36564992
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f7e48000/0x0/0x4ffc00000, data 0x375bb0c/0x3826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 127852544 unmapped: 14737408 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.977439880s of 21.282739639s, submitted: 54
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb4f4a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c827a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 18481152 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 ms_handle_reset con 0x55a56e4f2800 session 0x55a56eb8c780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 18432000 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1416208 data_alloc: 234881024 data_used: 28905472
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 18432000 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e3d0d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f859b000/0x0/0x4ffc00000, data 0x300a5e6/0x30d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274557 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9528000/0x0/0x4ffc00000, data 0x207ebd6/0x2145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9528000/0x0/0x4ffc00000, data 0x207ebd6/0x2145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9528000/0x0/0x4ffc00000, data 0x207ebd6/0x2145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274557 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.112005234s of 15.587738037s, submitted: 80
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9525000/0x0/0x4ffc00000, data 0x2080639/0x2148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277531 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f9525000/0x0/0x4ffc00000, data 0x2080639/0x2148000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 131 ms_handle_reset con 0x55a56dcf1c00 session 0x55a56c7b34a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9521000/0x0/0x4ffc00000, data 0x20821b6/0x214b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281177 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9521000/0x0/0x4ffc00000, data 0x20821b6/0x214b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.898226738s of 13.929399490s, submitted: 15
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283479 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 132 ms_handle_reset con 0x55a56c8d1000 session 0x55a56e2963c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f951f000/0x0/0x4ffc00000, data 0x2083d87/0x214e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283479 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30957568 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f951f000/0x0/0x4ffc00000, data 0x2083d87/0x214e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 132 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 8724 writes, 34K keys, 8724 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8724 writes, 2036 syncs, 4.28 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 878 writes, 2698 keys, 878 commit groups, 1.0 writes per commit group, ingest: 1.85 MB, 0.00 MB/s
Interval WAL: 878 writes, 391 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
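The periodic stats dump (every 600 s) is internally consistent and shows an almost idle database; the per-sync figures follow directly from the counters:

    print(f"{8724 / 2036:.2f}")    # cumulative WAL writes per sync -> 4.28
    print(f"{878 / 391:.2f}")      # interval WAL writes per sync   -> 2.25
    print(f"{8724 / 3000.1:.1f}")  # ~2.9 writes/s over the 3000 s uptime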
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 173.001434326s of 173.086853027s, submitted: 32
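The _kv_sync_thread line quantifies how idle the BlueStore commit thread was over its reporting window; the busy fraction is tiny:

    idle, total, submitted = 173.001434326, 173.086853027, 32
    busy = total - idle
    print(f"busy {busy:.3f}s ({busy / total:.3%}), {submitted / total:.2f} commits/s")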
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 31694848 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56eb5d800 session 0x55a56eb44f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56d927400 session 0x55a56bb654a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 35536896 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b31cf00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 35446784 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130752 data_alloc: 218103808 data_used: 10362880
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 35438592 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa4a1000/0x0/0x4ffc00000, data 0x1103745/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56d727800 session 0x55a56e425e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.714269638s of 20.615226746s, submitted: 137
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56fc29800 session 0x55a56e607680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
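Taken together, the distinct tune_memory samples in this burst show tcmalloc's mapped heap stepping down from ~106.5 MiB to ~99.8 MiB as buffers are freed after the connection resets; the deltas:

    mapped = [111648768, 110895104, 107053056, 107143168, 107151360, 104677376]
    for before, after in zip(mapped, mapped[1:]):
        print(f"{(after - before) / 2**20:+.2f} MiB")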
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
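The tune_memory message repeats because the OSD's priority-cache tuner reruns every few hundred milliseconds; with mapped + unmapped equal to heap and old mem equal to new mem, each pass is a no-op. A minimal parsing sketch, assuming the field meanings from Ceph's PriorityCache tuner (target = osd_memory_target, mapped/unmapped = the tcmalloc heap split, new mem = the cache budget chosen for the next pass):

```python
import re

# Parse one "prioritycache tune_memory" line. Field meanings assumed from
# Ceph's PriorityCache tuner: target = osd_memory_target, mapped/unmapped =
# tcmalloc heap split, new mem = cache budget for the next pass.
LINE = ("prioritycache tune_memory target: 4294967296 mapped: 104677376 "
        "unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832")

FIELDS = re.compile(r"target: (\d+) mapped: (\d+) unmapped: (\d+) heap: (\d+) "
                    r"old mem: (\d+) new mem: (\d+)")
target, mapped, unmapped, heap, old_mem, new_mem = map(int, FIELDS.search(LINE).groups())

assert mapped + unmapped == heap  # holds for every tune_memory line in this log
print(f"target {target / 2**30:.0f} GiB, mapped {mapped / 2**20:.1f} MiB, "
      f"budget {'held' if old_mem == new_mem else 'retuned'} at {new_mem / 2**20:.0f} MiB")
```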
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 74.238616943s of 74.364341736s, submitted: 25
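The utilization line reduces to a busy fraction: the RocksDB sync thread was idle for almost the whole 74 s window while committing 25 transactions. A quick check of the arithmetic:

```python
# Reduce a "_kv_sync_thread utilization" line to a busy fraction: the thread
# was idle 74.24 s of a 74.36 s window while committing 25 transactions.
idle_s, window_s, submitted = 74.238616943, 74.364341736, 25
busy = 1.0 - idle_s / window_s
print(f"busy {busy:.2%}, {submitted / window_s:.2f} submits/s")  # busy 0.17%, 0.34 submits/s
```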
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 37888000 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104710144 unmapped: 37879808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 134 ms_handle_reset con 0x55a56fc29400 session 0x55a56c2b2000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa473000/0x0/0x4ffc00000, data 0x1134735/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
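The hex fields in these heartbeat lines come from BlueStore's store_statfs. The field order used below (available/internally reserved/total, then data stored/allocated) is an assumption consistent with how the numbers move through this log: the first figure shrinks as the data figures grow. A small decoder:

```python
import re

# Decode the store_statfs(...) portion of a heartbeat line. Field order is an
# assumption: available/reserved/total, then "data stored/allocated".
LINE = ("osd_stat(store_statfs(0x4fa473000/0x0/0x4ffc00000, data 0x1134735/0x11fb000, "
        "compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])")

m = re.search(r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+), "
              r"data 0x([0-9a-f]+)/0x([0-9a-f]+)", LINE)
avail, reserved, total, stored, allocated = (int(x, 16) for x in m.groups())
print(f"total {total / 2**30:.1f} GiB, available {avail / 2**30:.1f} GiB, "
      f"data stored {stored / 2**20:.1f} MiB (allocated {allocated / 2**20:.1f} MiB)")
```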
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 37863424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
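The handle_osd_map lines show the OSD catching up one epoch at a time: it holds 134, the source advertises [1,135], so it applies 135. A toy illustration of that arithmetic only, not Ceph's actual map handling:

```python
# Toy illustration of "handle_osd_map epochs [135,135], i have 134,
# src has [1,135]": apply the advertised epochs we do not hold yet.
def epochs_to_apply(have, first, last):
    return range(max(have + 1, first), last + 1)

print(list(epochs_to_apply(134, 135, 135)))  # [135]
print(list(epochs_to_apply(133, 134, 134)))  # [134]
```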
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56d9ffa40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197032 data_alloc: 218103808 data_used: 7073792
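The _resize_shards line partitions the 2845415832-byte budget across the kv, kv_onode, meta, and data shards; the four allocations account for roughly 98.5% of it. A sketch that checks the split, with values copied from the line above:

```python
# Check the shard split in the "_resize_shards" line above: the four
# allocations should account for nearly all of cache_size.
cache_size = 2845415832
allocs = {"kv": 1207959552, "kv_onode": 234881024,
          "meta": 1140850688, "data": 218103808}

assigned = sum(allocs.values())
print(f"assigned {assigned / cache_size:.1%} of the cache budget")  # 98.5%
for name, nbytes in allocs.items():
    print(f"{name:9s} {nbytes / 2**20:7.0f} MiB")
```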
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f97fb000/0x0/0x4ffc00000, data 0x1da7e52/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 37871616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 26 02:27:18 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970178124' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
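The audit entry records a client dispatching "mgr metadata" to the monitor, the same command the CLI issues for `ceph mgr metadata --format json-pretty`. A sketch of the equivalent call through the python-rados binding; the conffile path and implicit admin keyring are placeholders for illustration:

```python
import json
import rados  # python-rados binding

# Dispatch "mgr metadata" to the monitor, as the CLI does.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # placeholder path
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "mgr metadata", "format": "json-pretty"}), b"")
print(outbuf.decode() if ret == 0 else outs)
cluster.shutdown()
```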
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56c28c1e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28800 session 0x55a56df770e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56df765a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 60.036300659s of 60.289119720s, submitted: 26
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56df77860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 31752192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56b8d8d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29800 session 0x55a56b353e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56df79680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e4f50e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56e606000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56e633c20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e606d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 30105600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56e631c20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56b9452c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323660 data_alloc: 218103808 data_used: 13881344
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8ba3000/0x0/0x4ffc00000, data 0x29fdf26/0x2acb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e6594a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56c28c000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 30760960 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56c28c1e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb8b680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb8d2c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329098 data_alloc: 218103808 data_used: 13885440
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 30408704 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.548220634s of 10.035059929s, submitted: 80
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112189440 unmapped: 30400512 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112197632 unmapped: 30392320 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 29270016 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 28794880 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398990 data_alloc: 234881024 data_used: 23592960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 27680768 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 27672576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb7da40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56eb4e780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e504960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 27639808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 27631616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.431713104s of 33.620201111s, submitted: 41
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 24911872 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403693 data_alloc: 234881024 data_used: 23093248
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118439936 unmapped: 24150016 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 25518080 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f89b0000/0x0/0x4ffc00000, data 0x2bf1eb4/0x2cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 24961024 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 24961024 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f89b0000/0x0/0x4ffc00000, data 0x2bf1eb4/0x2cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24928256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417167 data_alloc: 234881024 data_used: 23797760
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24928256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 24920064 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 24788992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 24788992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8991000/0x0/0x4ffc00000, data 0x2c11eb4/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56c826d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.656462669s of 10.237721443s, submitted: 113
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b368d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 24264704 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455488 data_alloc: 234881024 data_used: 23801856
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e255a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56c5acd20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455488 data_alloc: 234881024 data_used: 23801856
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56eb8d4a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d1000 session 0x55a56b353e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 23928832 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb4ef00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 22888448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 22888448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7f4d000/0x0/0x4ffc00000, data 0x3654f16/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 22880256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 22880256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509326 data_alloc: 234881024 data_used: 23920640
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.817225456s of 10.958477974s, submitted: 17
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 22790144 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e488d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 22609920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7f4a000/0x0/0x4ffc00000, data 0x3657f16/0x3724000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56df783c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56e4241e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c2530e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22446080 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e425e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56c7b5a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56c713860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474319 data_alloc: 234881024 data_used: 23801856
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474319 data_alloc: 234881024 data_used: 23801856
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.621734619s of 11.774707794s, submitted: 27
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ab000/0x0/0x4ffc00000, data 0x31f6eb4/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474963 data_alloc: 234881024 data_used: 23805952
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 22495232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ab000/0x0/0x4ffc00000, data 0x31f6eb4/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120184832 unmapped: 22405120 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 20611072 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 20045824 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.601295471s of 19.642702103s, submitted: 5
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56d9ffa40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29800 session 0x55a56e631860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 19996672 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e4892c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.012077332s of 18.237907410s, submitted: 45
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 22724608 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432429 data_alloc: 234881024 data_used: 20500480
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 19464192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f7be52/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123240448 unmapped: 19349504 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 18661376 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451629 data_alloc: 234881024 data_used: 20959232
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8598000/0x0/0x4ffc00000, data 0x3003e52/0x30ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 20234240 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e6305a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 20250624 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56e632960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e632f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d926c00 session 0x55a56e632d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 20250624 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e5043c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896427155s of 10.405808449s, submitted: 124
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e213860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566241 data_alloc: 234881024 data_used: 20971520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b9463c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e2125a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d927400 session 0x55a56c13c000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb503c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f775c000/0x0/0x4ffc00000, data 0x3e469cf/0x3f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 28008448 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 28008448 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565081 data_alloc: 234881024 data_used: 20971520
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb4f2c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 27992064 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb50780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 27992064 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f7752000/0x0/0x4ffc00000, data 0x3e509cf/0x3f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56eb7d860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb5d800 session 0x55a56e607860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56c28c000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56c4052c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56b31c000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e607680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c800 session 0x55a56c5acf00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56df78f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c000 session 0x55a56c7b3860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.034696579s of 10.289134979s, submitted: 43
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56b8d6780
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582211 data_alloc: 234881024 data_used: 22630400
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f7728000/0x0/0x4ffc00000, data 0x3e7a9cf/0x3f46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c253e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b815c20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56b8a9860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 29573120 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c252f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56e3d01e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb3cd20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c000 session 0x55a56eb4f680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e633680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e606000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56c502f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b368b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e607a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 129622016 unmapped: 24518656 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 22323200 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b8a81e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb3cb40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e4f41e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56eb3c3c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6187000/0x0/0x4ffc00000, data 0x5419a41/0x54e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 30539776 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e4f45a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e505860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e212d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 29499392 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e2121e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1641379 data_alloc: 234881024 data_used: 20008960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x45f3a41/0x46c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 128786432 unmapped: 25354240 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1785935 data_alloc: 251658240 data_used: 36515840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 22511616 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 22085632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 22085632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x45f3a41/0x46c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 22052864 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.828028679s of 13.312131882s, submitted: 110
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 21905408 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 21905408 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56e297e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28800 session 0x55a56e2963c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e296f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e633e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 21831680 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.208217621s of 25.232786179s, submitted: 5
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e632000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56e254f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 16474112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6a9e000/0x0/0x4ffc00000, data 0x4b01aa3/0x4bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871388 data_alloc: 251658240 data_used: 37527552
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 16580608 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 16703488 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 13041664 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 11075584 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143572992 unmapped: 10567680 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29400 session 0x55a56e6323c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1965642 data_alloc: 251658240 data_used: 38883328
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143818752 unmapped: 10321920 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e632960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5cf4000/0x0/0x4ffc00000, data 0x58a2aa3/0x5971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 10313728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e633680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c7b4b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 11026432 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 11026432 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.638150215s of 10.307005882s, submitted: 200
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56e296b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28000 session 0x55a56df765a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143122432 unmapped: 11018240 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5cfc000/0x0/0x4ffc00000, data 0x58a2ac6/0x5972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1793725 data_alloc: 234881024 data_used: 34222080
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b8145a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f63ed000/0x0/0x4ffc00000, data 0x4a2ba64/0x4afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1805933 data_alloc: 251658240 data_used: 36061184
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f63ed000/0x0/0x4ffc00000, data 0x4a2ba64/0x4afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56df7b0e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56eb7c1e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e3d0000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.527814865s of 26.195085526s, submitted: 83
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619367 data_alloc: 234881024 data_used: 30744576
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619367 data_alloc: 234881024 data_used: 30744576
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 16957440 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 16957440 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 10330112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 10330112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b4b000/0x0/0x4ffc00000, data 0x4644a64/0x4713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 8953856 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1740889 data_alloc: 234881024 data_used: 31739904
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.758532524s of 15.169282913s, submitted: 129
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b17000/0x0/0x4ffc00000, data 0x4678a64/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56df7b860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c8c2000
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c8c32c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56e2134a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815789223s of 12.824033737s, submitted: 1
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e213680
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 16744448 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 16744448 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778373 data_alloc: 234881024 data_used: 31748096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b3000/0x0/0x4ffc00000, data 0x4adca64/0x4bab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56b31cf00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e632b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e6334a0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b3000/0x0/0x4ffc00000, data 0x4adca64/0x4bab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56e633e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1781619 data_alloc: 234881024 data_used: 31748096
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 16654336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 16785408 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 17022976 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16990208 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.152029037s of 12.250783920s, submitted: 20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801379 data_alloc: 251658240 data_used: 34910208
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b8a9860
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801323 data_alloc: 251658240 data_used: 34910208
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.086006165s of 30.111179352s, submitted: 4
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1866039 data_alloc: 251658240 data_used: 35270656
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5ee4000/0x0/0x4ffc00000, data 0x52a4a74/0x5374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e2e000/0x0/0x4ffc00000, data 0x5351a74/0x5421000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1878977 data_alloc: 251658240 data_used: 35135488
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1872269 data_alloc: 251658240 data_used: 35135488
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.147089005s of 13.530894279s, submitted: 89
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144056320 unmapped: 16384000 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144179200 unmapped: 16261120 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1897909 data_alloc: 251658240 data_used: 37761024
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144474112 unmapped: 15966208 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1941027 data_alloc: 251658240 data_used: 37781504
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56f455000 session 0x55a56e212960
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144572416 unmapped: 15867904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5976000/0x0/0x4ffc00000, data 0x5818a74/0x58e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144572416 unmapped: 15867904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb51e00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d926c00 session 0x55a56eb51c20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56df7ad20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1941027 data_alloc: 251658240 data_used: 37781504
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132235527s of 12.247295380s, submitted: 20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56b8a8f00
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x5819aa7/0x58eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 16097280 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 15278080 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1967442 data_alloc: 251658240 data_used: 41213952
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x5819aa7/0x58eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5971000/0x0/0x4ffc00000, data 0x581aaa7/0x58ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5971000/0x0/0x4ffc00000, data 0x581aaa7/0x58ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1970102 data_alloc: 251658240 data_used: 41492480
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006687164s of 10.064773560s, submitted: 9
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e6323c0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29800 session 0x55a56e3d1a40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5966000/0x0/0x4ffc00000, data 0x5826aa7/0x58f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb3cb40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b1aa97/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1846947 data_alloc: 251658240 data_used: 38170624
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6669000/0x0/0x4ffc00000, data 0x4b21a97/0x4bf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29c00 session 0x55a56b8d8d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1848537 data_alloc: 251658240 data_used: 38178816
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.912253380s of 10.141081810s, submitted: 27
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b8d81e0
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba97/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1637779 data_alloc: 234881024 data_used: 29880320
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba12/0x389a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1637779 data_alloc: 234881024 data_used: 29880320
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.099064827s of 12.339138985s, submitted: 39
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba12/0x389a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
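handle_osd_map is the map catch-up path: the peer offered epochs [137,137], this OSD has 136, so exactly epoch 137 gets applied (and the next message, [137,138] against 137, yields 138). The selection is simple interval arithmetic, sketched here rather than quoted from the OSD source:

def epochs_to_apply(have, msg_first, msg_last):
    # Apply only maps newer than the one we already have,
    # limited to the epochs actually carried in the message.
    return list(range(max(have + 1, msg_first), msg_last + 1))

print(epochs_to_apply(136, 137, 137))  # [137]
print(epochs_to_apply(137, 137, 138))  # [138]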
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56d926c00 session 0x55a56df76d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e254d20
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 19636224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e424b40
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140795904 unmapped: 19644416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:27:18 compute-0 ceph-osd[207774]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:27:18 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:27:18 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1647934 data_alloc: 234881024 data_used: 29888512
Nov 26 02:27:18 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 19578880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.290 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.292 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.293 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:32:34 compute-0 nova_compute[350387]: 2025-11-26 02:32:34.295 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
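This ovsdbapp burst is python-ovs's reconnect state machine: after ~5 s without traffic on tcp:127.0.0.1:6640 it sends a JSON-RPC inactivity probe (entering IDLE) and returns to ACTIVE once the reply arrives. A toy model of the timing rule only, not the python-ovs API:

IDLE_PROBE_MS = 5000  # probe interval consistent with the "idle 5003 ms" line

def next_action(now_ms, last_rx_ms, probe_outstanding):
    idle = now_ms - last_rx_ms
    if probe_outstanding:
        return "IDLE: waiting for probe reply; any data returns us to ACTIVE"
    if idle >= IDLE_PROBE_MS:
        return "send inactivity probe, enter IDLE"
    return f"ACTIVE: probe due in {IDLE_PROBE_MS - idle} ms"

print(next_action(5003, 0, False))      # send inactivity probe, enter IDLE
print(next_action(5005, 5004, False))   # ACTIVE: probe due in 4999 ms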
Nov 26 02:32:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:34 compute-0 rsyslogd[188548]: imjournal: 16479 messages lost due to rate-limiting (20000 allowed within 600 seconds)
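The imjournal message means rsyslogd hit its configured rate limit (20000 journal messages per 600 s, per the message itself) and discarded the overflow, so at least ~36k messages arrived in that window, an average above 60 msg/s (much of it the DEBUG chatter above):

allowed, interval_s, lost = 20000, 600, 16479
arrived = allowed + lost  # lower bound: only drops beyond the limit are counted
print(arrived, "msgs in", interval_s, "s =>", round(arrived / interval_s, 1), "msg/s")
# 36479 msgs in 600 s => 60.8 msg/s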
Nov 26 02:32:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:32:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:37 compute-0 podman[482038]: 2025-11-26 02:32:37.557356127 +0000 UTC m=+0.108966581 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:32:37 compute-0 podman[482039]: 2025-11-26 02:32:37.638075594 +0000 UTC m=+0.187594299 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
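The podman health_status events embed each container's EDPM config_data as a Python-literal dict, so ast.literal_eval can lift the healthcheck spec back out of a captured line. A sketch on a trimmed excerpt of the ovn_controller entry above:

import ast

CONFIG_DATA = ("{'depends_on': ['openvswitch.service'], "
               "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
               "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}")

cfg = ast.literal_eval(CONFIG_DATA)  # safe: parses literals only, no code execution
print(cfg["healthcheck"]["test"], "mounted from", cfg["healthcheck"]["mount"])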
Nov 26 02:32:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:39 compute-0 nova_compute[350387]: 2025-11-26 02:32:39.295 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:39 compute-0 nova_compute[350387]: 2025-11-26 02:32:39.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:39 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:32:39.406 286844 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '26:ff:21', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'f2:c5:68:96:98:b1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 02:32:39 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:32:39.408 286844 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 02:32:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
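Every pgmap tick repeats one summary line; a small parser (regex tailored to the exact shape seen here) turns it into fields, and the invariant 321/321 active+clean is what a healthy cluster looks like:

import re

PGMAP = ("pgmap v2473: 321 pgs: 321 active+clean; "
         "57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail")

m = re.match(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); ([\d.]+ \w+) data, "
             r"([\d.]+ \w+) used, ([\d.]+ \w+) / ([\d.]+ \w+) avail", PGMAP)
version, pgs, n, state, data, used, avail, size = m.groups()
assert pgs == n and state == "active+clean"  # every PG clean => healthy
print(f"pgmap v{version}: {data} data, {used} used, {avail}/{size}")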
Nov 26 02:32:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:32:41
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 26 02:32:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
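The balancer run is a no-op: mode upmap with a 5% misplaced-PG ceiling, and 0 of an allowed 10 optimization steps prepared, i.e. the PG distribution already satisfies the optimizer. The gating logic amounts to something like this (a hedged sketch, not the mgr module's code):

def plan_summary(prepared, budget, misplaced, max_misplaced=0.05):
    # Defer when too much data is already in flight; otherwise apply
    # at most `budget` upmap adjustments, or report "balanced" if none.
    if misplaced > max_misplaced:
        return "defer: too many PGs already misplaced"
    return "balanced" if prepared == 0 else f"apply {min(prepared, budget)} upmap changes"

print(plan_summary(prepared=0, budget=10, misplaced=0.0))  # balanced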
Nov 26 02:32:41 compute-0 podman[482082]: 2025-11-26 02:32:41.583366563 +0000 UTC m=+0.129353493 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 02:32:41 compute-0 podman[482083]: 2025-11-26 02:32:41.58467407 +0000 UTC m=+0.123342984 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:32:42 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:32:42.411 286844 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=27d03014-5e51-4d89-b5a1-b13242894075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
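Read together with the two 02:32:39 agent lines: SB_Global bumped nb_cfg from 19 to 20, the agent waited its announced 3 seconds, then acknowledged by writing neutron:ovn-metadata-sb-cfg='20' into its Chassis_Private row, presumably so that many chassis do not all write back at the same instant. Schematically (not the neutron code path):

import random
import time

def ack_nb_cfg(nb_cfg, max_delay_s=3):
    # Stagger the acknowledgement by a bounded delay before writing
    # the new sequence number back into this chassis' row.
    time.sleep(random.uniform(0, max_delay_s))
    return ("Chassis_Private", "external_ids",
            {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)})

print(ack_nb_cfg(20, max_delay_s=0.1))  # short delay for the demo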
Nov 26 02:32:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.881 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.882 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
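The manager is saying the [pollsters] source has more pollsters than worker threads, so with one thread the cycle runs strictly serially. The effect is easy to reproduce with a plain ThreadPoolExecutor:

import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

pollsters = [f"pollster-{i}" for i in range(8)]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:  # matches "[1] threads"
    list(pool.map(poll, pollsters))
print(f"{time.monotonic() - start:.1f}s for {len(pollsters)} pollsters on one thread")
# ~0.8 s: cycle time grows linearly once pollsters outnumber workers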
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.882 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.883 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.887 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.894 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.897 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.898 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.902 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.902 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.902 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.902 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:32:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:32:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
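The block above is one complete ceilometer polling cycle on an idle hypervisor: each pollster first runs its discovery method (local_instances), the discovery finds no instances (the resource tracker below confirms zero allocated vCPUs), so the pollster logs a skip, and the cycle closes with one "Finished processing" line per meter. A minimal sketch of that skip-if-empty pattern; the names below are illustrative, not ceilometer's actual classes:

```python
# Illustrative sketch of a skip-if-no-resources polling cycle; the function
# and list names here are hypothetical, not ceilometer's real API.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
LOG = logging.getLogger("polling")

def discover_local_instances():
    # Stand-in for the libvirt-backed discovery; an idle compute node
    # returns an empty list, as in the log above.
    return []

POLLSTERS = ["disk.device.usage", "power.state", "disk.device.write.bytes"]

def run_cycle(pollsters, discover):
    for name in pollsters:
        resources = discover()
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        for resource in resources:
            LOG.debug("Polling %s for %s", name, resource)
        LOG.debug("Finished processing pollster [%s].", name)

run_cycle(POLLSTERS, discover_local_instances)
```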
Nov 26 02:32:44 compute-0 nova_compute[350387]: 2025-11-26 02:32:44.298 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:32:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:46 compute-0 podman[482119]: 2025-11-26 02:32:46.527164846 +0000 UTC m=+0.090503692 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Nov 26 02:32:46 compute-0 podman[482120]: 2025-11-26 02:32:46.549321328 +0000 UTC m=+0.095031469 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
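The two health_status events above come from podman's built-in healthcheck timer: it periodically executes the test command recorded in config_data inside the container and publishes healthy/unhealthy plus the failing streak. A by-hand equivalent for the node_exporter container, using standard podman flags with the test command and healthcheck mount copied from the logged config (the container name here is illustrative):

```python
# Start a container with the same healthcheck the log shows, by hand.
# Flags are standard podman; the test command and mount come from the
# config_data logged above, and assume that host path exists.
import subprocess

subprocess.run([
    "podman", "run", "-d", "--name", "node_exporter_demo",
    "-v", "/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z",
    "--health-cmd", "/openstack/healthcheck node_exporter",
    "--health-interval", "30s",
    "quay.io/prometheus/node-exporter:v1.5.0",
], check=True)

# Trigger the check once on demand; the healthy/unhealthy result and the
# failing streak then appear in `podman inspect node_exporter_demo`.
subprocess.run(["podman", "healthcheck", "run", "node_exporter_demo"], check=True)
```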
Nov 26 02:32:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.300 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.301 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.302 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.302 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.302 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:32:49 compute-0 nova_compute[350387]: 2025-11-26 02:32:49.304 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
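The six ovsdbapp lines above are the python-ovs reconnect machinery keeping the tcp:127.0.0.1:6640 OVSDB session alive: after roughly five seconds with no traffic it sends an inactivity probe (ACTIVE to IDLE) and returns to ACTIVE once the reply arrives. A toy version of that timer, far simpler than the real state machine in ovs/reconnect.py:

```python
# Toy inactivity-probe timer in the spirit of ovs/reconnect.py; this class
# is hypothetical and collapses the real multi-state FSM to two states.
import time

class ProbeSession:
    def __init__(self, probe_interval=5.0):
        self.probe_interval = probe_interval
        self.state = "ACTIVE"
        self.last_activity = time.monotonic()

    def received(self):
        # Any inbound message counts as activity; a probe reply also
        # moves the session back from IDLE to ACTIVE.
        self.last_activity = time.monotonic()
        self.state = "ACTIVE"

    def tick(self):
        idle_ms = (time.monotonic() - self.last_activity) * 1000
        if self.state == "ACTIVE" and idle_ms >= self.probe_interval * 1000:
            print(f"idle {idle_ms:.0f} ms, sending inactivity probe")
            self.state = "IDLE"      # waiting for the probe reply
        elif self.state == "IDLE" and idle_ms >= 2 * self.probe_interval * 1000:
            raise ConnectionError("no reply to inactivity probe; reconnect")

session = ProbeSession(probe_interval=0.01)
time.sleep(0.02)
session.tick()        # prints the probe message and enters IDLE
session.received()    # reply arrives: back to ACTIVE
```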
Nov 26 02:32:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:32:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
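The autoscaler figures above are internally consistent with pg_target = usage_ratio x bias x (target PGs per OSD x OSD count). With Ceph's default mon_target_pg_per_osd of 100 and the three OSDs this 60 GiB cluster implies (both inferred from the data, not stated in the log), the multiplier is 300 and the logged targets fall out exactly; the raw target is then rounded to a power of two, subject to per-pool minimums, giving the "quantized" value. A back-of-envelope check:

```python
# Reproduce the pg_autoscaler targets logged above. The per-OSD target
# (100) and the OSD count (3) are inferred assumptions, not values read
# from the log itself.
TARGET_PG_PER_OSD = 100
NUM_OSDS = 3

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(7.185749983720779e-06, 1.0))   # .mgr   -> 0.0021557249951162337
print(pg_target(0.0009191400908380543, 1.0))   # images -> 0.2757420272514163
print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> 0.0006104707950771635
```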
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:32:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev a40d2a73-a02d-4f70-8a5e-3e66e877e06f does not exist
Nov 26 02:32:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3df6d9f8-b7bf-4aab-9aba-4d59d23a4d03 does not exist
Nov 26 02:32:52 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 595b370d-a26a-47b7-97a8-cf459ecfcbd3 does not exist
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:32:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:32:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:32:52 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.328 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.329 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.330 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.331 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.332 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.454188344 +0000 UTC m=+0.093969009 container create e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.418457351 +0000 UTC m=+0.058238016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:32:53 compute-0 systemd[1]: Started libpod-conmon-e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49.scope.
Nov 26 02:32:53 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.615451533 +0000 UTC m=+0.255232208 container init e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.633074657 +0000 UTC m=+0.272855292 container start e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.638908031 +0000 UTC m=+0.278688706 container attach e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 02:32:53 compute-0 mystifying_morse[482464]: 167 167
Nov 26 02:32:53 compute-0 systemd[1]: libpod-e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49.scope: Deactivated successfully.
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.647368869 +0000 UTC m=+0.287149544 container died e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 02:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-034ae45cb08131a2670ae63791ba4ac04e81828fabb7483ad5bb81d1cc4a07c2-merged.mount: Deactivated successfully.
Nov 26 02:32:53 compute-0 podman[482428]: 2025-11-26 02:32:53.731437439 +0000 UTC m=+0.371218084 container remove e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_morse, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:32:53 compute-0 systemd[1]: libpod-conmon-e1feb6e8ac9aaa3fa7484e8d2027a61f2aee1416c6af3487b4ad8bb999b9bd49.scope: Deactivated successfully.
Nov 26 02:32:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:32:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440703721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:32:53 compute-0 nova_compute[350387]: 2025-11-26 02:32:53.864 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
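The resource tracker shells out to this same `ceph df` command twice in this window to size its Ceph-backed disk capacity. A stripped-down equivalent, run with the exact command line from the log; the key names ("stats", "total_avail_bytes") follow the usual `ceph df --format=json` layout and should be checked against your Ceph release:

```python
# Stand-in for the tracker's capacity probe: run the command the log shows
# and read the cluster-wide totals from the JSON it prints.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3)
# On this cluster that is ~60 GiB, matching free_disk=59.988... in the
# hypervisor resource view logged a second later.
```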
Nov 26 02:32:54 compute-0 podman[482489]: 2025-11-26 02:32:54.007377377 +0000 UTC m=+0.115908045 container create 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:32:54 compute-0 podman[482489]: 2025-11-26 02:32:53.95798611 +0000 UTC m=+0.066516838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:32:54 compute-0 systemd[1]: Started libpod-conmon-35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056.scope.
Nov 26 02:32:54 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:54 compute-0 podman[482489]: 2025-11-26 02:32:54.183320638 +0000 UTC m=+0.291851366 container init 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:32:54 compute-0 podman[482489]: 2025-11-26 02:32:54.210226753 +0000 UTC m=+0.318757431 container start 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:32:54 compute-0 podman[482489]: 2025-11-26 02:32:54.21936122 +0000 UTC m=+0.327891878 container attach 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.304 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.378 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.379 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.379 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.379 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.454 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.454 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:32:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:54 compute-0 nova_compute[350387]: 2025-11-26 02:32:54.616 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:32:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:32:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751692236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:32:55 compute-0 nova_compute[350387]: 2025-11-26 02:32:55.167 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:32:55 compute-0 nova_compute[350387]: 2025-11-26 02:32:55.179 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:32:55 compute-0 nova_compute[350387]: 2025-11-26 02:32:55.199 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:32:55 compute-0 nova_compute[350387]: 2025-11-26 02:32:55.204 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:32:55 compute-0 nova_compute[350387]: 2025-11-26 02:32:55.205 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
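The inventory dict logged at 02:32:55.199 is what placement turns into schedulable capacity, as (total - reserved) x allocation_ratio per resource class. Worked out for this host: VCPU (8 - 0) x 4.0 = 32 schedulable vCPUs, MEMORY_MB (7679 - 512) x 1.0 = 7167 MB, DISK_GB (59 - 1) x 0.9 = 52.2 GB:

```python
# Placement-style usable capacity, computed from the inventory values the
# scheduler report client logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```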
Nov 26 02:32:55 compute-0 practical_bassi[482505]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:32:55 compute-0 practical_bassi[482505]: --> relative data size: 1.0
Nov 26 02:32:55 compute-0 practical_bassi[482505]: --> All data devices are unavailable
Nov 26 02:32:55 compute-0 systemd[1]: libpod-35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056.scope: Deactivated successfully.
Nov 26 02:32:55 compute-0 systemd[1]: libpod-35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056.scope: Consumed 1.315s CPU time.
Nov 26 02:32:55 compute-0 podman[482489]: 2025-11-26 02:32:55.601680582 +0000 UTC m=+1.710211250 container died 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f410e3bdaf2b22a87f46a9307a216405473136744dae8319d30574f38a666131-merged.mount: Deactivated successfully.
Nov 26 02:32:55 compute-0 podman[482489]: 2025-11-26 02:32:55.707335909 +0000 UTC m=+1.815866537 container remove 35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 02:32:55 compute-0 systemd[1]: libpod-conmon-35235b1f54b1bc9812c7ec1e02c61890663be5ddbfb74d4a8bca2741394e9056.scope: Deactivated successfully.
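The short-lived ceph containers in this window (mystifying_morse, practical_bassi, serene_proskuriakova) are cephadm dry-running ceph-volume in disposable containers; practical_bassi's output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") is a batch report concluding that no device on this host can become a new OSD. The same device scan can be run by hand; the sketch below assumes the cephadm CLI is on PATH and that ceph-volume's inventory JSON carries its usual path/available fields, so verify both on your release:

```python
# Hand-run the device scan cephadm performs in those disposable containers.
# The invocation and JSON field names are assumptions based on ceph-volume's
# documented inventory output, not taken from this log.
import json
import subprocess

out = subprocess.check_output(
    ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"]
)
for dev in json.loads(out):
    print(dev["path"], "available" if dev["available"] else "unavailable")
```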
Nov 26 02:32:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:32:56 compute-0 nova_compute[350387]: 2025-11-26 02:32:56.208 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:32:56 compute-0 nova_compute[350387]: 2025-11-26 02:32:56.209 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:32:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:56 compute-0 podman[482708]: 2025-11-26 02:32:56.949407574 +0000 UTC m=+0.099099863 container create d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:56.911683305 +0000 UTC m=+0.061375664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:32:57 compute-0 systemd[1]: Started libpod-conmon-d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136.scope.
Nov 26 02:32:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:57.088813558 +0000 UTC m=+0.238505867 container init d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:57.101565547 +0000 UTC m=+0.251257816 container start d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:57.107160694 +0000 UTC m=+0.256853003 container attach d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:32:57 compute-0 serene_proskuriakova[482736]: 167 167
Nov 26 02:32:57 compute-0 systemd[1]: libpod-d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136.scope: Deactivated successfully.
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:57.113709078 +0000 UTC m=+0.263401417 container died d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 02:32:57 compute-0 podman[482724]: 2025-11-26 02:32:57.137239988 +0000 UTC m=+0.105691078 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:32:57 compute-0 podman[482721]: 2025-11-26 02:32:57.137590138 +0000 UTC m=+0.111050459 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 02:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f990f7adf81f3eb2cc93f42a5067d475a6a35d3772f2b90a8f6070083c0d4b48-merged.mount: Deactivated successfully.
Nov 26 02:32:57 compute-0 podman[482708]: 2025-11-26 02:32:57.171193962 +0000 UTC m=+0.320886231 container remove d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:32:57 compute-0 podman[482725]: 2025-11-26 02:32:57.175625966 +0000 UTC m=+0.134460876 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
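The three health_status events above (ovn_metadata_agent, ceilometer_agent_compute, podman_exporter) come from podman's periodic healthchecks; each container's config_data label embeds the check as a 'healthcheck' dict carrying the test command and the host mount with the script. The label is a Python dict literal, so it can be pulled apart without podman installed — a minimal sketch, with the config_data string shortened to the relevant keys:

    # Minimal sketch: the config_data label in these events is a Python
    # dict literal, so ast.literal_eval recovers the healthcheck definition.
    import ast

    config_data = (
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent',"
        " 'test': '/openstack/healthcheck'}, 'net': 'host'}"
    )

    cfg = ast.literal_eval(config_data)
    hc = cfg.get("healthcheck", {})
    print("test command:", hc.get("test"))
    print("script mount:", hc.get("mount"))

The same check can also be fired by hand with `podman healthcheck run ovn_metadata_agent`, which exits non-zero when the container is unhealthy.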
Nov 26 02:32:57 compute-0 systemd[1]: libpod-conmon-d2741d62c96e75eeea7ab7ea16b89e77630c5f9049875f7abb405accedbe0136.scope: Deactivated successfully.
Nov 26 02:32:57 compute-0 podman[482803]: 2025-11-26 02:32:57.416229502 +0000 UTC m=+0.078332051 container create 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:32:57 compute-0 podman[482803]: 2025-11-26 02:32:57.380370345 +0000 UTC m=+0.042472894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:32:57 compute-0 systemd[1]: Started libpod-conmon-5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0.scope.
Nov 26 02:32:57 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456feac6a7310ac7d0afcc3cbf133d5180d9de903bed1fe84dddd90b1ddbc608/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456feac6a7310ac7d0afcc3cbf133d5180d9de903bed1fe84dddd90b1ddbc608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456feac6a7310ac7d0afcc3cbf133d5180d9de903bed1fe84dddd90b1ddbc608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456feac6a7310ac7d0afcc3cbf133d5180d9de903bed1fe84dddd90b1ddbc608/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
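These four kernel lines are the stock XFS warning for filesystems without the bigtime feature: inode timestamps are capped at 0x7fffffff seconds after the epoch, the classic 32-bit time_t limit. What that cap means in calendar terms:

    # 0x7fffffff seconds after the Unix epoch -- the limit the kernel prints.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00

Newer xfsprogs can enable bigtime on such filesystems, which pushes the limit far past 2038.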
Nov 26 02:32:57 compute-0 podman[482803]: 2025-11-26 02:32:57.582049058 +0000 UTC m=+0.244151667 container init 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:32:57 compute-0 podman[482803]: 2025-11-26 02:32:57.594509098 +0000 UTC m=+0.256611647 container start 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 02:32:57 compute-0 podman[482803]: 2025-11-26 02:32:57.602160063 +0000 UTC m=+0.264262662 container attach 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]: {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    "0": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "devices": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "/dev/loop3"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            ],
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_name": "ceph_lv0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_size": "21470642176",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "name": "ceph_lv0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "tags": {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_name": "ceph",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.crush_device_class": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.encrypted": "0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_id": "0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.vdo": "0"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            },
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "vg_name": "ceph_vg0"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        }
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    ],
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    "1": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "devices": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "/dev/loop4"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            ],
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_name": "ceph_lv1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_size": "21470642176",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "name": "ceph_lv1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "tags": {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_name": "ceph",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.crush_device_class": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.encrypted": "0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_id": "1",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.vdo": "0"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            },
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "vg_name": "ceph_vg1"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        }
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    ],
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    "2": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "devices": [
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "/dev/loop5"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            ],
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_name": "ceph_lv2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_size": "21470642176",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "name": "ceph_lv2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "tags": {
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.cluster_name": "ceph",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.crush_device_class": "",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.encrypted": "0",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osd_id": "2",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:                "ceph.vdo": "0"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            },
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "type": "block",
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:            "vg_name": "ceph_vg2"
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:        }
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]:    ]
Nov 26 02:32:58 compute-0 fervent_stonebraker[482819]: }
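The JSON that fervent_stonebraker printed is a per-OSD inventory keyed by OSD id; the shape matches `ceph-volume lvm list --format json` style output (an inference from the fields — the log does not show the command line). Each entry ties an OSD to its logical volume, backing loop device, and the ceph.* LV tags; the three LVs at 21470642176 bytes are roughly 20 GiB each, which lines up with the "60 GiB / 60 GiB avail" pgmap lines nearby. A throwaway summarizer for a dump of this shape:

    # Minimal sketch: summarize a ceph-volume style per-OSD JSON dump
    # (structure as shown in the log: {osd_id: [ {lv fields...} ]}).
    import json

    def summarize(raw: str) -> None:
        for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                size_gib = int(lv["lv_size"]) / 2**30
                print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                      f"({size_gib:.1f} GiB, fsid {lv['tags']['ceph.osd_fsid']})")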
Nov 26 02:32:58 compute-0 systemd[1]: libpod-5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0.scope: Deactivated successfully.
Nov 26 02:32:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:32:58 compute-0 podman[482828]: 2025-11-26 02:32:58.537782173 +0000 UTC m=+0.055438137 container died 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 02:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-456feac6a7310ac7d0afcc3cbf133d5180d9de903bed1fe84dddd90b1ddbc608-merged.mount: Deactivated successfully.
Nov 26 02:32:58 compute-0 podman[482828]: 2025-11-26 02:32:58.633485541 +0000 UTC m=+0.151141505 container remove 5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:32:58 compute-0 systemd[1]: libpod-conmon-5ee962c3e8d6b92fee4413068a503ae87a2a565cedfb6e945c99cee4eec9a5d0.scope: Deactivated successfully.
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.307 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.310 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.328 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.329 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:32:59 compute-0 nova_compute[350387]: 2025-11-26 02:32:59.330 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
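The nova_compute DEBUG lines are oslo.service periodic tasks being dispatched by run_periodic_tasks: _heal_instance_info_cache ran and found no instances to heal, and _reclaim_queued_deletes is skipped further down because reclaim_instance_interval <= 0. The registration pattern behind these messages, as a simplified sketch (not nova's actual manager code):

    # Simplified sketch of the oslo.service pattern producing these
    # "Running periodic task ..." lines; nova's real ComputeManager adds
    # per-task spacing/jitter and config handling.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # rebuild and heal instance network info caches

        def run_once(self, context):
            self.run_periodic_tasks(context)  # emits "Running periodic task ..."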
Nov 26 02:32:59 compute-0 podman[158021]: time="2025-11-26T02:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:32:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
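podman[158021] here is the podman API service answering libpod REST calls; the podman_exporter container above points at it via CONTAINER_HOST=unix:///run/podman/podman.sock. The same containers/json query can be issued by hand over the socket — a stdlib-only sketch:

    # Sketch: query the libpod REST API over the unix socket, stdlib only.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])

From a shell, `curl --unix-socket /run/podman/podman.sock 'http://d/v4.9.3/libpod/containers/json?all=true'` does the same.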
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.777661496 +0000 UTC m=+0.080254344 container create 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:32:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8207 "" "Go-http-client/1.1"
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.74500699 +0000 UTC m=+0.047599888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:32:59 compute-0 systemd[1]: Started libpod-conmon-3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5.scope.
Nov 26 02:32:59 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.929352146 +0000 UTC m=+0.231945064 container init 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.947006402 +0000 UTC m=+0.249599250 container start 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.953555165 +0000 UTC m=+0.256148073 container attach 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 26 02:32:59 compute-0 upbeat_cori[482997]: 167 167
Nov 26 02:32:59 compute-0 systemd[1]: libpod-3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5.scope: Deactivated successfully.
Nov 26 02:32:59 compute-0 podman[482981]: 2025-11-26 02:32:59.958168435 +0000 UTC m=+0.260761283 container died 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0147769f77060d28bc9600975283dfb7550cb4679a1a44d105359d6c2813f42d-merged.mount: Deactivated successfully.
Nov 26 02:33:00 compute-0 podman[482981]: 2025-11-26 02:33:00.036283128 +0000 UTC m=+0.338875976 container remove 3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:33:00 compute-0 systemd[1]: libpod-conmon-3785f8800bbc21e914cb223c1bc7691332077fa68352b2d269b7017d7a2cc5a5.scope: Deactivated successfully.
Nov 26 02:33:00 compute-0 podman[483020]: 2025-11-26 02:33:00.288807169 +0000 UTC m=+0.089991628 container create 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:33:00 compute-0 podman[483020]: 2025-11-26 02:33:00.25645598 +0000 UTC m=+0.057640499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:33:00 compute-0 systemd[1]: Started libpod-conmon-2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f.scope.
Nov 26 02:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bda188a5dc3c00c3cfd4304c0340df9cf6cf38f8b8bb8a42ec514eb21f0e69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bda188a5dc3c00c3cfd4304c0340df9cf6cf38f8b8bb8a42ec514eb21f0e69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bda188a5dc3c00c3cfd4304c0340df9cf6cf38f8b8bb8a42ec514eb21f0e69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bda188a5dc3c00c3cfd4304c0340df9cf6cf38f8b8bb8a42ec514eb21f0e69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:33:00 compute-0 podman[483020]: 2025-11-26 02:33:00.469169423 +0000 UTC m=+0.270353912 container init 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:33:00 compute-0 podman[483020]: 2025-11-26 02:33:00.489682399 +0000 UTC m=+0.290866858 container start 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 02:33:00 compute-0 podman[483020]: 2025-11-26 02:33:00.496033577 +0000 UTC m=+0.297218036 container attach 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:33:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:01 compute-0 openstack_network_exporter[367323]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:33:01 compute-0 openstack_network_exporter[367323]: ERROR   02:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:33:01 compute-0 openstack_network_exporter[367323]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:33:01 compute-0 openstack_network_exporter[367323]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:33:01 compute-0 openstack_network_exporter[367323]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
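The openstack_network_exporter errors are socket-discovery failures, not crashes: it looks for appctl control sockets (pidfile-named <daemon>.<pid>.ctl files) under the rundirs it has mounted (/run/ovn and /run/openvswitch, per its config_data later in this log), and ovn-northd does not run on a compute node, so no such socket ever appears. A quick existence check of the same paths (patterns assumed from those mounts):

    # Check for the appctl control sockets the exporter probes for.
    from glob import glob

    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob(pattern)
        print(pattern, "->", hits or "no control socket here")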
Nov 26 02:33:01 compute-0 zealous_nobel[483036]: {
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_id": 0,
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "type": "bluestore"
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    },
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_id": 2,
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "type": "bluestore"
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    },
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_id": 1,
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:        "type": "bluestore"
Nov 26 02:33:01 compute-0 zealous_nobel[483036]:    }
Nov 26 02:33:01 compute-0 zealous_nobel[483036]: }
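zealous_nobel's dump is the same three OSDs keyed by osd_uuid, resolved to their device-mapper paths and confirmed as bluestore; it resembles `ceph-volume raw list` output (again inferred from the fields). Cross-checking it against the earlier per-OSD-id listing is mechanical:

    # Sketch: cross-check the two JSON dumps from this log (structures as
    # shown above; 'lvm' keyed by osd id, 'raw' keyed by osd fsid).
    import json

    def crosscheck(lvm_raw: str, raw_raw: str) -> None:
        lvm = json.loads(lvm_raw)
        raw = json.loads(raw_raw)
        by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
                   for lvs in lvm.values() for lv in lvs}
        for fsid, osd in raw.items():
            status = "ok" if fsid in by_fsid else "missing from lvm listing"
            print(f"osd.{osd['osd_id']} {fsid}: {osd['device']} "
                  f"({osd['type']}) {status}")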
Nov 26 02:33:01 compute-0 systemd[1]: libpod-2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f.scope: Deactivated successfully.
Nov 26 02:33:01 compute-0 systemd[1]: libpod-2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f.scope: Consumed 1.311s CPU time.
Nov 26 02:33:01 compute-0 podman[483020]: 2025-11-26 02:33:01.805458954 +0000 UTC m=+1.606643423 container died 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 26 02:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2bda188a5dc3c00c3cfd4304c0340df9cf6cf38f8b8bb8a42ec514eb21f0e69-merged.mount: Deactivated successfully.
Nov 26 02:33:01 compute-0 podman[483020]: 2025-11-26 02:33:01.901173022 +0000 UTC m=+1.702357441 container remove 2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:33:01 compute-0 systemd[1]: libpod-conmon-2e444f614cb7f5c800f01045fb5b6e9f22c903ca93ddb27bb8144807c377a86f.scope: Deactivated successfully.
Nov 26 02:33:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:33:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:33:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:33:01 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
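These two mon_command calls are the cephadm mgr persisting the host's device inventory (the JSON gathered by the short-lived containers above) into the monitor config-key store under mgr/cephadm/host.compute-0*. The stored value can be read back with `ceph config-key get`; a small sketch:

    # Read back the inventory cephadm just stored (key taken from the log).
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(out[:200])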
Nov 26 02:33:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 3a7efda1-8714-4840-a5e1-e8e841d17f64 does not exist
Nov 26 02:33:01 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 8c36c57c-4790-45ca-bb38-4c7bae65a8da does not exist
Nov 26 02:33:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:33:02 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:33:04 compute-0 nova_compute[350387]: 2025-11-26 02:33:04.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:33:04 compute-0 nova_compute[350387]: 2025-11-26 02:33:04.308 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:33:04 compute-0 nova_compute[350387]: 2025-11-26 02:33:04.313 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:33:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:05 compute-0 nova_compute[350387]: 2025-11-26 02:33:05.321 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:33:05 compute-0 nova_compute[350387]: 2025-11-26 02:33:05.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:33:05 compute-0 nova_compute[350387]: 2025-11-26 02:33:05.322 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:33:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:08 compute-0 podman[483131]: 2025-11-26 02:33:08.622452854 +0000 UTC m=+0.172005051 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 26 02:33:08 compute-0 podman[483132]: 2025-11-26 02:33:08.639369539 +0000 UTC m=+0.181358354 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 26 02:33:09 compute-0 nova_compute[350387]: 2025-11-26 02:33:09.313 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:33:09 compute-0 nova_compute[350387]: 2025-11-26 02:33:09.315 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:33:10 compute-0 nova_compute[350387]: 2025-11-26 02:33:10.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:33:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:12 compute-0 podman[483175]: 2025-11-26 02:33:12.586300951 +0000 UTC m=+0.129047334 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=)
Nov 26 02:33:12 compute-0 podman[483176]: 2025-11-26 02:33:12.630561674 +0000 UTC m=+0.165926590 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:33:14 compute-0 nova_compute[350387]: 2025-11-26 02:33:14.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:33:14 compute-0 nova_compute[350387]: 2025-11-26 02:33:14.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 02:33:14 compute-0 nova_compute[350387]: 2025-11-26 02:33:14.317 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:33:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:17 compute-0 podman[483215]: 2025-11-26 02:33:17.566112266 +0000 UTC m=+0.108764875 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:33:17 compute-0 podman[483214]: 2025-11-26 02:33:17.575062687 +0000 UTC m=+0.121670217 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:33:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.319 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.320 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.321 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.321 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.321 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:19 compute-0 nova_compute[350387]: 2025-11-26 02:33:19.323 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
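[editor's note] The five-second cadence above (4998/4999-ms poll timeout, "sending inactivity probe", entering IDLE, then straight back to ACTIVE) is the ovs reconnect state machine keeping nova's OVSDB session to tcp:127.0.0.1:6640 alive. A toy sketch of that probe logic — not the actual ovs library code, assuming a 5000 ms probe interval as in the log:

    import time

    PROBE_INTERVAL = 5.0  # seconds; matches the ~5000 ms timeouts in the log

    class Session:
        """Toy model of the ACTIVE/IDLE probe cycle logged by ovsdbapp."""
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_received(self):
            # any traffic from the server counts as activity
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"

        def run(self):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                self.send_probe()    # e.g. a JSON-RPC echo request
                self.state = "IDLE"  # waiting for the probe reply
            elif self.state == "IDLE" and idle >= 2 * PROBE_INTERVAL:
                self.disconnect()    # no reply within another interval: session is dead

        def send_probe(self): ...
        def disconnect(self): ...

In the log the probe is answered immediately (the [POLLIN] lines), so the session re-enters ACTIVE and the disconnect branch is never reached.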
Nov 26 02:33:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:24 compute-0 nova_compute[350387]: 2025-11-26 02:33:24.324 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:33:25.023 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:33:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:33:25.023 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:33:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:33:25.023 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:33:25 compute-0 nova_compute[350387]: 2025-11-26 02:33:25.325 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:25 compute-0 nova_compute[350387]: 2025-11-26 02:33:25.325 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 02:33:25 compute-0 nova_compute[350387]: 2025-11-26 02:33:25.349 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 02:33:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:33:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2511758827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:33:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:33:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2511758827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
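[editor's note] The two audit entries above show client.openstack dispatching "df" and "osd pool get-quota" monitor commands — the usual capacity poll from an OpenStack storage client. The same calls can be reproduced with the librados Python bindings; the client name, conf path, and pool name are taken from the log, the rest is a minimal sketch:

    import json
    import rados

    # connect as the same Ceph identity seen in the audit log
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # equivalent of the mon_command({"prefix":"df", ...}) dispatch above
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        df = json.loads(out)

        # and the per-pool quota query for the 'volumes' pool
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        quota = json.loads(out)
    finally:
        cluster.shutdown()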
Nov 26 02:33:27 compute-0 podman[483261]: 2025-11-26 02:33:27.554060589 +0000 UTC m=+0.097927790 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:33:27 compute-0 podman[483260]: 2025-11-26 02:33:27.576696834 +0000 UTC m=+0.113724953 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:33:27 compute-0 podman[483259]: 2025-11-26 02:33:27.589608867 +0000 UTC m=+0.133847428 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 26 02:33:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:29 compute-0 nova_compute[350387]: 2025-11-26 02:33:29.326 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:29 compute-0 podman[158021]: time="2025-11-26T02:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:33:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:33:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Nov 26 02:33:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:30 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:31 compute-0 openstack_network_exporter[367323]: ERROR   02:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:33:31 compute-0 openstack_network_exporter[367323]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:33:31 compute-0 openstack_network_exporter[367323]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:33:31 compute-0 openstack_network_exporter[367323]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:33:31 compute-0 openstack_network_exporter[367323]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
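[editor's note] These exporter errors are expected on this node: openstack_network_exporter talks to ovs-vswitchd, ovsdb-server, and ovn-northd through their ovs-appctl control sockets, and ovn-northd does not run on a compute host (likewise, the dpif-netdev PMD queries only apply to a userspace datapath). A small availability check along the same lines, using the conventional control-socket locations as assumptions — this deployment remaps the OVN run directory via container volumes, so adjust the paths as needed:

    import glob

    # conventional control-socket locations; adjust for non-default run dirs
    PATTERNS = {
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        sockets = glob.glob(pattern)
        # empty result mirrors the "no control socket files found" error above
        print(daemon, sockets or "no control socket files found")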
Nov 26 02:33:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.328 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.330 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:34 compute-0 nova_compute[350387]: 2025-11-26 02:33:34.331 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:35 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:39 compute-0 nova_compute[350387]: 2025-11-26 02:33:39.332 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:39 compute-0 podman[483320]: 2025-11-26 02:33:39.571806467 +0000 UTC m=+0.123334954 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 02:33:39 compute-0 podman[483321]: 2025-11-26 02:33:39.652329708 +0000 UTC m=+0.195596693 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 26 02:33:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:33:41
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 26 02:33:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:33:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:43 compute-0 podman[483364]: 2025-11-26 02:33:43.600712242 +0000 UTC m=+0.158842281 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4)
Nov 26 02:33:43 compute-0 podman[483365]: 2025-11-26 02:33:43.605125606 +0000 UTC m=+0.146577167 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 02:33:44 compute-0 nova_compute[350387]: 2025-11-26 02:33:44.335 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:45 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:48 compute-0 podman[483400]: 2025-11-26 02:33:48.578407818 +0000 UTC m=+0.125735632 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Nov 26 02:33:48 compute-0 podman[483401]: 2025-11-26 02:33:48.585345093 +0000 UTC m=+0.120781423 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.337 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.340 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.340 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.340 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.341 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:33:49 compute-0 nova_compute[350387]: 2025-11-26 02:33:49.342 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:33:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:33:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
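[editor's note] Each pg_autoscaler line above computes a raw PG target as capacity ratio x bias x an overall PG budget for the CRUSH root — about 300 here, judging by '.mgr' (7.19e-06 of space -> target 2.16e-03) — and then quantizes it to a power of two with a per-pool floor. A sketch that reproduces the quantized values in this log; the budget of 300 and the pg_num_min floors (1 for .mgr, 16 for the CephFS metadata pool, 32 elsewhere) are inferred from the output, not universal constants:

    import math

    ROOT_PG_BUDGET = 300  # inferred from this log, not a universal constant

    def quantized_pg_target(usage_ratio, bias, pg_num_min):
        raw = usage_ratio * bias * ROOT_PG_BUDGET
        if raw < 1:
            return pg_num_min
        # round up to a power of two, never below the pool's floor
        return max(2 ** math.ceil(math.log2(raw)), pg_num_min)

    print(quantized_pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # .mgr -> 1
    print(quantized_pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # cephfs.cephfs.meta -> 16
    print(quantized_pg_target(0.0009191400908380543, 1.0, pg_num_min=32))  # images -> 32

The real autoscaler also only acts when the quantized value differs from the current pg_num by a sizable factor, which is consistent with cephfs.cephfs.meta staying at 32 despite a quantized target of 16.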
Nov 26 02:33:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.322 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.361 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.362 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.362 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.362 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.363 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:33:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:33:53 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1345 writes, 6173 keys, 1345 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s
Interval WAL: 1345 writes, 1345 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    113.3      0.56              0.30        36    0.015       0      0       0.0       0.0
  L6      1/0    7.17 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.2    164.1    135.4      1.94              1.18        35    0.055    193K    18K       0.0       0.0
 Sum      1/0    7.17 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.2    127.6    130.5      2.50              1.48        71    0.035    193K    18K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.7    124.2    126.1      0.38              0.22        10    0.038     34K   2537       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0    164.1    135.4      1.94              1.18        35    0.055    193K    18K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    114.7      0.55              0.30        35    0.016       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.061, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.31 GB read, 0.07 MB/s read, 2.5 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5636b955b1f0#2 capacity: 304.00 MB usage: 40.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.00038 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2756,39.10 MB,12.8631%) FilterBlock(72,541.80 KB,0.174046%) IndexBlock(72,885.25 KB,0.284376%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 26 02:33:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:33:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872261608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:33:53 compute-0 nova_compute[350387]: 2025-11-26 02:33:53.924 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.343 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.555 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.558 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3948MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.558 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.559 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.794 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.795 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:33:54 compute-0 nova_compute[350387]: 2025-11-26 02:33:54.815 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:33:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:33:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266847648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:33:55 compute-0 nova_compute[350387]: 2025-11-26 02:33:55.345 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:33:55 compute-0 nova_compute[350387]: 2025-11-26 02:33:55.359 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:33:55 compute-0 nova_compute[350387]: 2025-11-26 02:33:55.375 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:33:55 compute-0 nova_compute[350387]: 2025-11-26 02:33:55.376 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:33:55 compute-0 nova_compute[350387]: 2025-11-26 02:33:55.377 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
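[editor's note] The inventory dict the resource tracker reports above is what gets pushed to Placement; schedulable capacity per resource class is (total - reserved) x allocation_ratio, which is how Placement computes it. Plugging in the logged numbers as a worked example:

    # values copied from the inventory data in the log entry above
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 schedulable units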
Nov 26 02:33:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:33:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:58 compute-0 nova_compute[350387]: 2025-11-26 02:33:58.353 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:58 compute-0 nova_compute[350387]: 2025-11-26 02:33:58.354 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:33:58 compute-0 podman[483490]: 2025-11-26 02:33:58.574140931 +0000 UTC m=+0.109858996 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:33:58 compute-0 podman[483488]: 2025-11-26 02:33:58.577541927 +0000 UTC m=+0.132360858 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 02:33:58 compute-0 podman[483489]: 2025-11-26 02:33:58.580283213 +0000 UTC m=+0.123318243 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 02:33:59 compute-0 nova_compute[350387]: 2025-11-26 02:33:59.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:59 compute-0 nova_compute[350387]: 2025-11-26 02:33:59.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:33:59 compute-0 nova_compute[350387]: 2025-11-26 02:33:59.346 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:33:59 compute-0 podman[158021]: time="2025-11-26T02:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:33:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:33:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8208 "" "Go-http-client/1.1"
Nov 26 02:34:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:01 compute-0 nova_compute[350387]: 2025-11-26 02:34:01.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:34:01 compute-0 nova_compute[350387]: 2025-11-26 02:34:01.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:34:01 compute-0 nova_compute[350387]: 2025-11-26 02:34:01.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:34:01 compute-0 nova_compute[350387]: 2025-11-26 02:34:01.323 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 02:34:01 compute-0 openstack_network_exporter[367323]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:34:01 compute-0 openstack_network_exporter[367323]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:34:01 compute-0 openstack_network_exporter[367323]: ERROR   02:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:34:01 compute-0 openstack_network_exporter[367323]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:34:01 compute-0 openstack_network_exporter[367323]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:34:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:03 compute-0 podman[483712]: 2025-11-26 02:34:03.807506494 +0000 UTC m=+0.136888145 container exec 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 02:34:03 compute-0 podman[483712]: 2025-11-26 02:34:03.910388212 +0000 UTC m=+0.239769883 container exec_died 4ef91eb781dd92f710c42404d4be295680dfd99b2b127b2e6ce3afcc38f6439d (image=quay.io/ceph/ceph:v18, name=ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:34:04 compute-0 nova_compute[350387]: 2025-11-26 02:34:04.349 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:34:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:34:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:34:05 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:05 compute-0 nova_compute[350387]: 2025-11-26 02:34:05.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:34:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:05 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:05 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5bf9ce6c-260a-4650-ba31-824b23c52f70 does not exist
Nov 26 02:34:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b7732d1e-cb8d-4b67-a6bd-261a57730e23 does not exist
Nov 26 02:34:06 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 21d9e903-41df-42de-97b1-3fbd8b852066 does not exist
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:34:06 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:34:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:06 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
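The mon_command payloads in the handle_command lines above come from the cephadm mgr module (entity mgr.compute-0.vbisdw); each has a plain ceph CLI equivalent. A sketch reproducing the same three calls via subprocess; the subcommands are real ceph CLI commands, but running them assumes an admin keyring is available on the host:

```python
import subprocess

# CLI equivalents of the mon commands the cephadm mgr module dispatched above.
for cmd in (
    ["ceph", "config", "generate-minimal-conf"],
    ["ceph", "auth", "get", "client.admin"],
    ["ceph", "osd", "tree", "destroyed", "--format", "json"],
):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(result.stdout)
```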
Nov 26 02:34:07 compute-0 nova_compute[350387]: 2025-11-26 02:34:07.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:34:07 compute-0 nova_compute[350387]: 2025-11-26 02:34:07.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.610702981 +0000 UTC m=+0.090066860 container create 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.577436917 +0000 UTC m=+0.056800816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:07 compute-0 systemd[1]: Started libpod-conmon-96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c.scope.
Nov 26 02:34:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.76736829 +0000 UTC m=+0.246732229 container init 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.77806176 +0000 UTC m=+0.257425619 container start 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.783351249 +0000 UTC m=+0.262715118 container attach 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:34:07 compute-0 focused_khayyam[484147]: 167 167
Nov 26 02:34:07 compute-0 systemd[1]: libpod-96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c.scope: Deactivated successfully.
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.789111111 +0000 UTC m=+0.268475000 container died 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-96496aba584f4d88b10e7804bd22f59793a9c1749817438fcec16279224c2b36-merged.mount: Deactivated successfully.
Nov 26 02:34:07 compute-0 podman[484131]: 2025-11-26 02:34:07.862689997 +0000 UTC m=+0.342053856 container remove 96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:34:07 compute-0 systemd[1]: libpod-conmon-96aaba34d96dad35bb1e7a3ff2c0d54f26ac4529990f837317b1bf846ef6006c.scope: Deactivated successfully.
Nov 26 02:34:08 compute-0 podman[484170]: 2025-11-26 02:34:08.12672078 +0000 UTC m=+0.087980371 container create 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 02:34:08 compute-0 systemd[1]: Started libpod-conmon-2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677.scope.
Nov 26 02:34:08 compute-0 podman[484170]: 2025-11-26 02:34:08.09965886 +0000 UTC m=+0.060918431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
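The 0x7fffffff in these kernel messages is the largest 32-bit signed time_t; xfs filesystems without the bigtime feature cannot represent inode timestamps past that instant, which is why the kernel flags 2038. A quick check of the date it decodes to, using nothing beyond the constant itself:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2**31 - 1, the largest 32-bit signed time_t.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```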
Nov 26 02:34:08 compute-0 podman[484170]: 2025-11-26 02:34:08.280740235 +0000 UTC m=+0.241999856 container init 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:34:08 compute-0 podman[484170]: 2025-11-26 02:34:08.312037354 +0000 UTC m=+0.273296925 container start 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 26 02:34:08 compute-0 podman[484170]: 2025-11-26 02:34:08.318364821 +0000 UTC m=+0.279624452 container attach 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:34:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:09 compute-0 nova_compute[350387]: 2025-11-26 02:34:09.352 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:34:09 compute-0 gallant_boyd[484186]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:34:09 compute-0 gallant_boyd[484186]: --> relative data size: 1.0
Nov 26 02:34:09 compute-0 gallant_boyd[484186]: --> All data devices are unavailable
Nov 26 02:34:09 compute-0 systemd[1]: libpod-2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677.scope: Deactivated successfully.
Nov 26 02:34:09 compute-0 systemd[1]: libpod-2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677.scope: Consumed 1.247s CPU time.
Nov 26 02:34:09 compute-0 podman[484170]: 2025-11-26 02:34:09.622062597 +0000 UTC m=+1.583322228 container died 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 02:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-619d4c8244a3aebf477e0b654c347edeef75f1f5cd3c620643e1bbe03539c4be-merged.mount: Deactivated successfully.
Nov 26 02:34:09 compute-0 podman[484170]: 2025-11-26 02:34:09.737938941 +0000 UTC m=+1.699198482 container remove 2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:34:09 compute-0 systemd[1]: libpod-conmon-2c07b1673699b00447f4c46d6ccbe4af2fba867319a112ec7500b28440dde677.scope: Deactivated successfully.
Nov 26 02:34:09 compute-0 podman[484216]: 2025-11-26 02:34:09.81768734 +0000 UTC m=+0.142024379 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:34:09 compute-0 podman[484229]: 2025-11-26 02:34:09.876289695 +0000 UTC m=+0.158312596 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:34:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:10 compute-0 podman[484405]: 2025-11-26 02:34:10.858163514 +0000 UTC m=+0.087780526 container create e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:34:10 compute-0 podman[484405]: 2025-11-26 02:34:10.828416849 +0000 UTC m=+0.058033881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:10 compute-0 systemd[1]: Started libpod-conmon-e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172.scope.
Nov 26 02:34:10 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:10 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:11 compute-0 podman[484405]: 2025-11-26 02:34:10.999627166 +0000 UTC m=+0.229244228 container init e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 02:34:11 compute-0 podman[484405]: 2025-11-26 02:34:11.017386905 +0000 UTC m=+0.247003927 container start e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:34:11 compute-0 podman[484405]: 2025-11-26 02:34:11.024163625 +0000 UTC m=+0.253780647 container attach e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:34:11 compute-0 infallible_tu[484420]: 167 167
Nov 26 02:34:11 compute-0 systemd[1]: libpod-e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172.scope: Deactivated successfully.
Nov 26 02:34:11 compute-0 podman[484405]: 2025-11-26 02:34:11.030549394 +0000 UTC m=+0.260166416 container died e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 26 02:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f5957ced40f2e22de0f6fa4564c44bf443e34b83c80ca754e69f1b7276fa3f0-merged.mount: Deactivated successfully.
Nov 26 02:34:11 compute-0 podman[484405]: 2025-11-26 02:34:11.108117012 +0000 UTC m=+0.337734004 container remove e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 26 02:34:11 compute-0 systemd[1]: libpod-conmon-e062506de6a3108b556ecc0ab34bc8be1eee51b6a87d84db38a0483c775b6172.scope: Deactivated successfully.
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:11 compute-0 podman[484443]: 2025-11-26 02:34:11.426127102 +0000 UTC m=+0.104191847 container create 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:34:11 compute-0 podman[484443]: 2025-11-26 02:34:11.390883852 +0000 UTC m=+0.068948637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:11 compute-0 systemd[1]: Started libpod-conmon-69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9.scope.
Nov 26 02:34:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6895a13f0ef375d5f3b9182729cdf8bb7b997794a698dbe831161a9d065b44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6895a13f0ef375d5f3b9182729cdf8bb7b997794a698dbe831161a9d065b44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6895a13f0ef375d5f3b9182729cdf8bb7b997794a698dbe831161a9d065b44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6895a13f0ef375d5f3b9182729cdf8bb7b997794a698dbe831161a9d065b44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:11 compute-0 podman[484443]: 2025-11-26 02:34:11.604056118 +0000 UTC m=+0.282120913 container init 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 02:34:11 compute-0 podman[484443]: 2025-11-26 02:34:11.639259106 +0000 UTC m=+0.317323851 container start 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:34:11 compute-0 podman[484443]: 2025-11-26 02:34:11.647022064 +0000 UTC m=+0.325086859 container attach 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 02:34:12 compute-0 nova_compute[350387]: 2025-11-26 02:34:12.294 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:34:12 compute-0 cool_neumann[484459]: {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    "0": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "devices": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "/dev/loop3"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            ],
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_name": "ceph_lv0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_size": "21470642176",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "name": "ceph_lv0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "tags": {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_name": "ceph",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.crush_device_class": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.encrypted": "0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_id": "0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.vdo": "0"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            },
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "vg_name": "ceph_vg0"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        }
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    ],
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    "1": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "devices": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "/dev/loop4"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            ],
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_name": "ceph_lv1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_size": "21470642176",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "name": "ceph_lv1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "tags": {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_name": "ceph",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.crush_device_class": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.encrypted": "0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_id": "1",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.vdo": "0"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            },
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "vg_name": "ceph_vg1"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        }
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    ],
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    "2": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "devices": [
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "/dev/loop5"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            ],
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_name": "ceph_lv2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_size": "21470642176",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "name": "ceph_lv2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "tags": {
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.cluster_name": "ceph",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.crush_device_class": "",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.encrypted": "0",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osd_id": "2",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:                "ceph.vdo": "0"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            },
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "type": "block",
Nov 26 02:34:12 compute-0 cool_neumann[484459]:            "vg_name": "ceph_vg2"
Nov 26 02:34:12 compute-0 cool_neumann[484459]:        }
Nov 26 02:34:12 compute-0 cool_neumann[484459]:    ]
Nov 26 02:34:12 compute-0 cool_neumann[484459]: }
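The JSON block above appears to be `ceph-volume lvm list --format json` output from the short-lived cool_neumann container: a map of OSD id to logical volumes, with the authoritative OSD metadata carried in the LV tags. A minimal sketch for pulling the useful fields out of a captured copy (the file name lvm_list.json is hypothetical):

    import json

    # Hypothetical capture of the JSON printed by the container above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]  # ceph-volume stores OSD metadata as LV tags
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"spec={tags['ceph.osdspec_affinity']}")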
Nov 26 02:34:12 compute-0 systemd[1]: libpod-69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9.scope: Deactivated successfully.
Nov 26 02:34:12 compute-0 podman[484443]: 2025-11-26 02:34:12.476240367 +0000 UTC m=+1.154305152 container died 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab6895a13f0ef375d5f3b9182729cdf8bb7b997794a698dbe831161a9d065b44-merged.mount: Deactivated successfully.
Nov 26 02:34:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:12 compute-0 podman[484443]: 2025-11-26 02:34:12.597565004 +0000 UTC m=+1.275629749 container remove 69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:34:12 compute-0 systemd[1]: libpod-conmon-69a869d5df285b1210a6703bba81dab310df0174d4cfbc6cdfe0111099ea0ac9.scope: Deactivated successfully.
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.725052012 +0000 UTC m=+0.092245111 container create aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.689090872 +0000 UTC m=+0.056283971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:13 compute-0 systemd[1]: Started libpod-conmon-aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2.scope.
Nov 26 02:34:13 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.871793282 +0000 UTC m=+0.238986391 container init aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.886711561 +0000 UTC m=+0.253904630 container start aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.892053901 +0000 UTC m=+0.259246970 container attach aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 02:34:13 compute-0 flamboyant_ardinghelli[484635]: 167 167
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.8987567 +0000 UTC m=+0.265949789 container died aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:34:13 compute-0 systemd[1]: libpod-aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2.scope: Deactivated successfully.
Nov 26 02:34:13 compute-0 podman[484634]: 2025-11-26 02:34:13.946018086 +0000 UTC m=+0.140075844 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 02:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b24df9ec5feb12f28da65e979d1c37a2ade4b2915560716a41f43c040c32ced-merged.mount: Deactivated successfully.
Nov 26 02:34:13 compute-0 podman[484617]: 2025-11-26 02:34:13.966032138 +0000 UTC m=+0.333225207 container remove aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:34:13 compute-0 podman[484631]: 2025-11-26 02:34:13.975169295 +0000 UTC m=+0.169084089 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler)
Nov 26 02:34:13 compute-0 systemd[1]: libpod-conmon-aebd2b42357885e59f65ebf26fcc83426bb34c29fa21b47c1bd17985092c11a2.scope: Deactivated successfully.
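The single line "167 167" printed by flamboyant_ardinghelli looks like cephadm probing the uid and gid of the ceph user inside the image (167:167 in the upstream Ceph containers), which it needs in order to chown daemon directories on the host. A sketch of an equivalent probe, assuming podman is on PATH and using the image digest from the log; that cephadm stats exactly /var/lib/ceph is an assumption here:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Run stat inside the image to read the owner of /var/lib/ceph.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = map(int, out)
    print(f"ceph user inside the image is {uid}:{gid}")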
Nov 26 02:34:14 compute-0 podman[484693]: 2025-11-26 02:34:14.223288011 +0000 UTC m=+0.094023631 container create 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:34:14 compute-0 podman[484693]: 2025-11-26 02:34:14.189041549 +0000 UTC m=+0.059777239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:34:14 compute-0 systemd[1]: Started libpod-conmon-00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93.scope.
Nov 26 02:34:14 compute-0 nova_compute[350387]: 2025-11-26 02:34:14.355 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d068166c91617f6c16eadeb0e99f17f2e473c0d9ea5eb3fdcc297d4dc1d74abd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d068166c91617f6c16eadeb0e99f17f2e473c0d9ea5eb3fdcc297d4dc1d74abd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d068166c91617f6c16eadeb0e99f17f2e473c0d9ea5eb3fdcc297d4dc1d74abd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d068166c91617f6c16eadeb0e99f17f2e473c0d9ea5eb3fdcc297d4dc1d74abd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:34:14 compute-0 podman[484693]: 2025-11-26 02:34:14.432111714 +0000 UTC m=+0.302847404 container init 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:34:14 compute-0 podman[484693]: 2025-11-26 02:34:14.451698184 +0000 UTC m=+0.322433834 container start 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:34:14 compute-0 podman[484693]: 2025-11-26 02:34:14.458242328 +0000 UTC m=+0.328978018 container attach 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:34:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:15 compute-0 objective_darwin[484708]: {
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_id": 0,
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "type": "bluestore"
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    },
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_id": 2,
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "type": "bluestore"
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    },
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_id": 1,
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:34:15 compute-0 objective_darwin[484708]:        "type": "bluestore"
Nov 26 02:34:15 compute-0 objective_darwin[484708]:    }
Nov 26 02:34:15 compute-0 objective_darwin[484708]: }
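objective_darwin prints what looks like `ceph-volume raw list` output: the same three OSDs, this time keyed by osd_uuid and pointing at the device-mapper paths. A small cross-check against the lvm listing earlier in the log (file names hypothetical; the lvm excerpt above only showed osd.1 and osd.2, so osd.0 may be absent from that capture):

    import json

    lvm = json.load(open("lvm_list.json"))  # keyed by osd id
    raw = json.load(open("raw_list.json"))  # keyed by osd uuid

    by_uuid = {lv["tags"]["ceph.osd_fsid"]: osd_id
               for osd_id, lvs in lvm.items() for lv in lvs}
    for uuid, meta in raw.items():
        lvm_id = by_uuid.get(uuid, "?")
        mark = "OK" if lvm_id == str(meta["osd_id"]) else "check"
        print(mark, f"osd.{meta['osd_id']}", uuid, "->", meta["device"])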
Nov 26 02:34:15 compute-0 systemd[1]: libpod-00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93.scope: Deactivated successfully.
Nov 26 02:34:15 compute-0 podman[484693]: 2025-11-26 02:34:15.681018042 +0000 UTC m=+1.551753652 container died 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:34:15 compute-0 systemd[1]: libpod-00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93.scope: Consumed 1.221s CPU time.
Nov 26 02:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d068166c91617f6c16eadeb0e99f17f2e473c0d9ea5eb3fdcc297d4dc1d74abd-merged.mount: Deactivated successfully.
Nov 26 02:34:15 compute-0 podman[484693]: 2025-11-26 02:34:15.78496509 +0000 UTC m=+1.655700710 container remove 00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_darwin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:34:15 compute-0 systemd[1]: libpod-conmon-00379510d23784abc162adf81eeef9436489b5cd3caec620b448e2c982886a93.scope: Deactivated successfully.
Nov 26 02:34:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:34:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:34:15 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 0fd8e484-6042-4f49-a54f-9e3d1f64ae40 does not exist
Nov 26 02:34:15 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 10a19beb-1fbf-460a-a55a-491c393d76ea does not exist
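The two config-key set commands are the cephadm mgr caching the refreshed host and device inventory in the monitor config-key store (the progress warnings just mean the matching progress events had already been completed and dropped). The cached blob can be read back with the ceph CLI, assuming an admin keyring on the host:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:400])  # cephadm appears to store this as JSON text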
Nov 26 02:34:15 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.978358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124455978404, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1108, "num_deletes": 255, "total_data_size": 1648290, "memory_usage": 1677320, "flush_reason": "Manual Compaction"}
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124455992456, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1610819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50989, "largest_seqno": 52096, "table_properties": {"data_size": 1605454, "index_size": 2825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11208, "raw_average_key_size": 19, "raw_value_size": 1594703, "raw_average_value_size": 2744, "num_data_blocks": 127, "num_entries": 581, "num_filter_entries": 581, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764124350, "oldest_key_time": 1764124350, "file_creation_time": 1764124455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 14186 microseconds, and 8627 cpu microseconds.
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.992543) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1610819 bytes OK
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.992565) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.995592) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.995614) EVENT_LOG_v1 {"time_micros": 1764124455995607, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.995635) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1643135, prev total WAL file size 1643135, number of live WAL files 2.
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.997187) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323537' seq:0, type:0; will stop at (end)
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:34:15 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1573KB)], [122(7341KB)]
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124455997243, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9128145, "oldest_snapshot_seqno": -1}
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6563 keys, 9018022 bytes, temperature: kUnknown
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124456063487, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9018022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8976324, "index_size": 24178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172095, "raw_average_key_size": 26, "raw_value_size": 8859826, "raw_average_value_size": 1349, "num_data_blocks": 960, "num_entries": 6563, "num_filter_entries": 6563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764124455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.063804) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9018022 bytes
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.066378) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.6 rd, 135.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 7085, records dropped: 522 output_compression: NoCompression
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.066409) EVENT_LOG_v1 {"time_micros": 1764124456066395, "job": 74, "event": "compaction_finished", "compaction_time_micros": 66334, "compaction_time_cpu_micros": 41027, "output_level": 6, "num_output_files": 1, "total_output_size": 9018022, "num_input_records": 7085, "num_output_records": 6563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124456067114, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124456069355, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:15.996804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.069558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.069567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.069571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.069574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:34:16 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:34:16.069577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
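The amplification figures RocksDB reports for JOB 74 can be reproduced from the byte counts in the surrounding events: write-amplify is output bytes over the L0 input, and read-write-amplify adds all input bytes read to the output written. A worked check against the numbers above:

    # Figures taken from the EVENT_LOG lines for JOB 74.
    l0_in = 1_610_819           # table 124, the freshly flushed L0 file
    l6_in = 9_128_145 - l0_in   # table 122 (input_data_size covers both inputs)
    out   = 9_018_022           # table 125, the compacted L6 output

    print(round(out / l0_in, 1))                    # 5.6  -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 11.3 -> read-write-amplify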
Nov 26 02:34:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:16 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:34:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.358 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.361 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.361 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.361 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.362 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:34:19 compute-0 nova_compute[350387]: 2025-11-26 02:34:19.364 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
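The idle/probe/ACTIVE sequence above is the OVSDB client's keepalive: after roughly 5 s of silence it sends an inactivity probe and drops to IDLE, returning to ACTIVE once the echo reply arrives (or reconnecting if it does not). A toy state machine mirroring that logic, not the actual ovs.reconnect API:

    PROBE_INTERVAL_MS = 5000  # matches the ~5004 ms idle seen in the log

    def tick(state, idle_ms, got_reply):
        if state == "ACTIVE" and idle_ms >= PROBE_INTERVAL_MS:
            return "IDLE", "send inactivity probe"
        if state == "IDLE":
            return ("ACTIVE", "echo reply") if got_reply \
                else ("RECONNECT", "probe timed out")
        return state, None

    print(tick("ACTIVE", 5004, False))  # ('IDLE', 'send inactivity probe')
    print(tick("IDLE", 60, True))       # ('ACTIVE', 'echo reply')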
Nov 26 02:34:19 compute-0 podman[484806]: 2025-11-26 02:34:19.592405247 +0000 UTC m=+0.130497155 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9)
Nov 26 02:34:19 compute-0 podman[484807]: 2025-11-26 02:34:19.60033404 +0000 UTC m=+0.135945548 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:34:20 compute-0 nova_compute[350387]: 2025-11-26 02:34:20.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:34:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:20 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:24 compute-0 nova_compute[350387]: 2025-11-26 02:34:24.364 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:24 compute-0 nova_compute[350387]: 2025-11-26 02:34:24.366 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:34:25.024 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:34:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:34:25.025 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:34:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:34:25.025 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:34:25 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:34:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1166869913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:34:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:34:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1166869913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 02:34:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:29 compute-0 nova_compute[350387]: 2025-11-26 02:34:29.367 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:29 compute-0 podman[484851]: 2025-11-26 02:34:29.561892874 +0000 UTC m=+0.091334826 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 02:34:29 compute-0 podman[484850]: 2025-11-26 02:34:29.588658195 +0000 UTC m=+0.118932040 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:34:29 compute-0 podman[484849]: 2025-11-26 02:34:29.612897656 +0000 UTC m=+0.148393028 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=edpm, tcib_managed=true)
Nov 26 02:34:29 compute-0 podman[158021]: time="2025-11-26T02:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:34:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:34:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Nov 26 02:34:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: ERROR   02:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: 
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:34:31 compute-0 openstack_network_exporter[367323]: 
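The exporter errors are lookup failures rather than crashes: ovs-appctl-style calls need the target daemon's control socket, and on a compute node ovn-northd does not run locally, so no socket exists (the datapath errors similarly suggest no userspace datapath is configured here). A sketch of the lookup the messages imply, with the socket directories being typical defaults rather than confirmed paths for this deployment:

    from pathlib import Path

    RUN_DIRS = (Path("/run/openvswitch"), Path("/run/ovn"))  # assumed defaults
    for pattern in ("ovsdb-server.*.ctl", "ovs-vswitchd.*.ctl", "ovn-northd.*.ctl"):
        hits = [p for d in RUN_DIRS for p in d.glob(pattern)]
        print(pattern, "->", [str(p) for p in hits] or "no control socket found")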
Nov 26 02:34:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:34 compute-0 nova_compute[350387]: 2025-11-26 02:34:34.369 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:39 compute-0 nova_compute[350387]: 2025-11-26 02:34:39.372 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:40 compute-0 podman[484906]: 2025-11-26 02:34:40.579368265 +0000 UTC m=+0.125439503 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 02:34:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:40 compute-0 podman[484907]: 2025-11-26 02:34:40.63902474 +0000 UTC m=+0.178487872 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
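The two podman events above are periodic healthcheck results: podman runs the 'test' command from each container's config_data healthcheck stanza (e.g. /openstack/healthcheck) and records health_status=healthy with a zero failing streak. A minimal sketch of driving the same check by hand, assuming the podman CLI's `healthcheck run` subcommand is available on the host:

    # Sketch: run a container's configured healthcheck on demand.
    # `podman healthcheck run` executes the container's 'test' command
    # and exits 0 when the container is healthy.
    import subprocess

    def is_healthy(container: str) -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0

    for name in ("ceilometer_agent_ipmi", "ovn_controller"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")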
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:34:41
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.meta', 'vms', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.data']
Nov 26 02:34:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
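This balancer pass (mode upmap, max misplaced 0.05) walked the eleven listed pools and prepared 0/10 changes, i.e. nothing needed rebalancing within the per-run cap of 10. A minimal, runnable sketch of that gating logic, with the optimizer stubbed out; only the two thresholds are taken from the log:

    # Sketch of the upmap balancer's outer loop as implied by the lines
    # above. propose_pg_upmaps() is a stub standing in for the optimizer.
    MAX_MISPLACED = 0.05   # "max misplaced 0.050000"
    MAX_CHANGES = 10       # the "/10" in "prepared 0/10 changes"

    def propose_pg_upmaps(pool):
        return []          # an already-balanced pool yields no remaps

    def plan_upmap(pools, misplaced_ratio):
        if misplaced_ratio >= MAX_MISPLACED:
            return []      # defer while too much data is already moving
        changes = []
        for pool in pools:
            changes.extend(propose_pg_upmaps(pool))
            if len(changes) >= MAX_CHANGES:
                return changes[:MAX_CHANGES]
        return changes

    pools = ['.mgr', 'images', 'vms', 'backups', 'volumes']
    print(f"prepared {len(plan_upmap(pools, 0.0))}/{MAX_CHANGES} changes")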
Nov 26 02:34:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
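Here the mgr rbd_support module reloads its trash-purge and mirror-snapshot schedules for the vms, volumes, backups and images pools (the empty start_after shows each scan begins from the start of the pool's schedule list). The same schedules can be inspected from any client node; a small sketch shelling out to the rbd CLI, assuming admin credentials on the host:

    # Sketch: list the trash purge schedules the handlers above reload.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):  # pools from the log
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool],
            capture_output=True, text=True)
        print(pool, out.stdout.strip() or "<no schedules>")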
Nov 26 02:34:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.883 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.884 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.884 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.885 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.886 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.888 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.889 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.889 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.894 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.894 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.895 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.898 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.898 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.898 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.902 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.902 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.903 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.903 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.903 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.904 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.904 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.904 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.904 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.905 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.905 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.905 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.906 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.906 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.906 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.907 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.908 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.909 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.910 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.910 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.910 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.910 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:34:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:34:42.910 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
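The whole burst above is one ceilometer polling cycle: the manager registers every pollster from the [pollsters] source onto a single-worker ThreadPoolExecutor, runs the shared local_instances discovery once (its result is then reused through the discovery cache), and, because this host currently runs no instances, skips every meter before logging the 'Finished processing' lines. A self-contained sketch of that register/discover/skip pattern, with the discovery trivially returning the empty list seen in the cache dumps:

    # Sketch of the polling cycle traced above: one worker thread, a
    # per-cycle discovery cache, and skip-on-empty behaviour.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []   # no instances on this host => every pollster is skipped

    def run_pollster(name, discovery_cache):
        resources = discovery_cache.setdefault(
            "local_instances", discover_local_instances())
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"polled {name} on {len(resources)} resources"

    pollsters = ["disk.ephemeral.size", "network.incoming.packets", "cpu"]
    cache = {}
    with ThreadPoolExecutor(max_workers=1) as pool:   # "[1] threads"
        for line in pool.map(lambda n: run_pollster(n, cache), pollsters):
            print(line)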
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.374 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.376 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.377 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.377 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.378 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:34:44 compute-0 nova_compute[350387]: 2025-11-26 02:34:44.379 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
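This nova_compute trace shows the OVS reconnect state machine keeping the tcp:127.0.0.1:6640 OVSDB session alive: after roughly 5 s with no traffic it sends an inactivity probe and enters IDLE, then returns to ACTIVE as soon as the reply arrives. A hedged sketch of that idle-timer idea (the real state machine lives in ovs/reconnect.py; only the ~5 s interval is read off the log):

    # Sketch of the inactivity-probe behaviour traced above.
    import time

    PROBE_INTERVAL = 5.0   # the ~5000 ms idle threshold seen in the log

    class Session:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_received(self):
            self.last_activity = time.monotonic()
            if self.state == "IDLE":
                self.state = "ACTIVE"   # probe answered: back to ACTIVE

        def run_timer(self, send_probe):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                send_probe()            # e.g. an OVSDB echo request
                self.state = "IDLE"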
Nov 26 02:34:44 compute-0 podman[484952]: 2025-11-26 02:34:44.546446562 +0000 UTC m=+0.085640746 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler)
Nov 26 02:34:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:44 compute-0 podman[484953]: 2025-11-26 02:34:44.582749021 +0000 UTC m=+0.113951231 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:34:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:49 compute-0 nova_compute[350387]: 2025-11-26 02:34:49.380 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:49 compute-0 nova_compute[350387]: 2025-11-26 02:34:49.382 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:34:50 compute-0 podman[484994]: 2025-11-26 02:34:50.557364467 +0000 UTC m=+0.097043606 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:34:50 compute-0 podman[484993]: 2025-11-26 02:34:50.574714804 +0000 UTC m=+0.121009779 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Nov 26 02:34:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:34:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
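[editor's note] The pg_autoscaler lines above are reproducible arithmetic: each pool's raw PG target is its capacity ratio times its bias times a cluster-wide PG budget. The sketch below reconstructs the logged numbers under two assumptions, a budget of 300 (3 OSDs times the default mon_target_pg_per_osd of 100, which matches every "pg target" value printed above) and per-pool floors (pg_num_min of 16 for the CephFS metadata pool, 32 for the other data pools here):

    # Sketch only: reconstructs the pg_autoscaler numbers logged above.
    # Assumed: 3 OSDs * mon_target_pg_per_osd=100 -> budget of 300 PGs,
    # and per-pool floors (pg_num_min) of 32, or 16 for cephfs metadata.
    def pg_target(capacity_ratio, bias, budget=300, floor=32):
        raw = capacity_ratio * bias * budget   # e.g. 0.00091914 * 300 ~= 0.2757
        pg = 1
        while pg * 2 <= raw:                   # quantize down to a power of two
            pg *= 2
        return max(pg, floor)

    print(pg_target(0.0009191400908380543, 1.0))             # images -> 32
    print(pg_target(5.087256625643029e-07, 4.0, floor=16))   # cephfs.cephfs.meta -> 16
    print(pg_target(7.185749983720779e-06, 1.0, floor=1))    # .mgr -> 1

By default the autoscaler only acts when the quantized target differs from the current pg_num by more than a factor of three, which is why cephfs.cephfs.meta can log a target of 16 against a current 32 without triggering a change.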
Nov 26 02:34:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:54 compute-0 nova_compute[350387]: 2025-11-26 02:34:54.384 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:34:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.368 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.369 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.369 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
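[editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock tracing. A minimal sketch of the decorator pattern that produces it (the function here is illustrative, not copied from nova's source, where this is a method on ResourceTracker):

    # Minimal oslo.concurrency usage that emits the acquire/release
    # DEBUG lines seen above; the body is a placeholder.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # work done while holding the "compute_resources" lock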
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.369 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.370 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:34:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:34:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038071272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:34:55 compute-0 nova_compute[350387]: 2025-11-26 02:34:55.867 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
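[editor's note] Between 02:34:55.370 and 02:34:55.867 nova shells out to `ceph df` and the mon audit channel records the dispatch from client.openstack. A standalone re-run of the exact command logged above, parsing the cluster totals (the JSON layout with a top-level "stats" object is the standard `ceph df --format=json` schema):

    # Re-runs the command logged by oslo_concurrency.processutils and
    # reads the cluster-wide totals from the standard JSON schema.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])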
Nov 26 02:34:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.478 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.480 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3935MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.480 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.481 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:34:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.676 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.677 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:34:56 compute-0 nova_compute[350387]: 2025-11-26 02:34:56.704 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:34:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:34:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2549338140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:34:57 compute-0 nova_compute[350387]: 2025-11-26 02:34:57.315 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 02:34:57 compute-0 nova_compute[350387]: 2025-11-26 02:34:57.329 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:34:57 compute-0 nova_compute[350387]: 2025-11-26 02:34:57.358 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
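[editor's note] The inventory dict in the report line above maps directly to what placement will schedule against: (total - reserved) * allocation_ratio per resource class. Worked out from the logged values:

    # Schedulable capacity derived from the inventory logged above:
    # (total - reserved) * allocation_ratio for each resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2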
Nov 26 02:34:57 compute-0 nova_compute[350387]: 2025-11-26 02:34:57.361 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:34:57 compute-0 nova_compute[350387]: 2025-11-26 02:34:57.362 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:34:58 compute-0 nova_compute[350387]: 2025-11-26 02:34:58.363 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:34:58 compute-0 nova_compute[350387]: 2025-11-26 02:34:58.364 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:34:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.386 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.387 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.388 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.388 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.389 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:34:59 compute-0 nova_compute[350387]: 2025-11-26 02:34:59.392 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:34:59 compute-0 podman[158021]: time="2025-11-26T02:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:34:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:34:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8207 "" "Go-http-client/1.1"
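[editor's note] The two GET lines above are a metrics collector hitting podman's libpod REST API over the local socket. The same containers/json query can be issued with only the Python standard library; the socket path /run/podman/podman.sock is taken from the podman_exporter config logged further down:

    # Queries the libpod REST API over the podman unix socket using only
    # the standard library (http.client with a custom connect()).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host header
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")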
Nov 26 02:35:00 compute-0 nova_compute[350387]: 2025-11-26 02:35:00.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:35:00 compute-0 podman[485082]: 2025-11-26 02:35:00.578179175 +0000 UTC m=+0.108847396 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:35:00 compute-0 podman[485081]: 2025-11-26 02:35:00.584283746 +0000 UTC m=+0.125240936 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 02:35:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:00 compute-0 podman[485080]: 2025-11-26 02:35:00.608582069 +0000 UTC m=+0.152397450 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 26 02:35:01 compute-0 nova_compute[350387]: 2025-11-26 02:35:01.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:35:01 compute-0 nova_compute[350387]: 2025-11-26 02:35:01.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:35:01 compute-0 nova_compute[350387]: 2025-11-26 02:35:01.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:35:01 compute-0 nova_compute[350387]: 2025-11-26 02:35:01.315 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 02:35:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:01 compute-0 openstack_network_exporter[367323]: ERROR   02:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:35:01 compute-0 openstack_network_exporter[367323]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:35:01 compute-0 openstack_network_exporter[367323]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:35:01 compute-0 openstack_network_exporter[367323]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:35:01 compute-0 openstack_network_exporter[367323]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
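[editor's note] The exporter errors above are appctl-style lookups failing: the client resolves a daemon's control socket from its pidfile as <rundir>/<name>.<pid>.ctl, and no such files are visible where this exporter looks (the ovn-northd errors are expected here, since northd runs on the control plane, not on a compute node). A quick local check of that convention, assuming the default rundir:

    # Checks for the OVS control sockets the exporter failed to find;
    # <name>.<pid>.ctl next to <name>.pid is the OVS rundir convention.
    import os

    rundir = "/var/run/openvswitch"  # assumed default; containers may remap it
    for name in ("ovs-vswitchd", "ovsdb-server"):
        pidfile = os.path.join(rundir, name + ".pid")
        if not os.path.exists(pidfile):
            print(name, "has no pidfile in", rundir)
            continue
        pid = open(pidfile).read().strip()
        ctl = os.path.join(rundir, "{}.{}.ctl".format(name, pid))
        print(ctl, "present" if os.path.exists(ctl) else "MISSING")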
Nov 26 02:35:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:04 compute-0 nova_compute[350387]: 2025-11-26 02:35:04.391 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:04 compute-0 nova_compute[350387]: 2025-11-26 02:35:04.394 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:06 compute-0 nova_compute[350387]: 2025-11-26 02:35:06.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:35:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:07 compute-0 nova_compute[350387]: 2025-11-26 02:35:07.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:35:07 compute-0 nova_compute[350387]: 2025-11-26 02:35:07.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 02:35:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:09 compute-0 nova_compute[350387]: 2025-11-26 02:35:09.393 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:09 compute-0 nova_compute[350387]: 2025-11-26 02:35:09.397 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:11 compute-0 podman[485134]: 2025-11-26 02:35:11.595102529 +0000 UTC m=+0.141920656 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:35:11 compute-0 podman[485135]: 2025-11-26 02:35:11.670538777 +0000 UTC m=+0.212335003 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 02:35:12 compute-0 nova_compute[350387]: 2025-11-26 02:35:12.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:35:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:14 compute-0 nova_compute[350387]: 2025-11-26 02:35:14.397 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:14 compute-0 nova_compute[350387]: 2025-11-26 02:35:14.399 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:14 compute-0 podman[485178]: 2025-11-26 02:35:14.817513129 +0000 UTC m=+0.102400336 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 02:35:14 compute-0 podman[485177]: 2025-11-26 02:35:14.851166664 +0000 UTC m=+0.141576396 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, io.openshift.expose-services=, config_id=edpm, vcs-type=git, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 26 02:35:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:17 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 37e99096-8049-437f-b7f1-7b1dbe621916 does not exist
Nov 26 02:35:17 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 9bd65f4c-016d-422a-92cf-6cf01d903592 does not exist
Nov 26 02:35:17 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 6ab33818-8c58-400a-8225-d4071d65079e does not exist
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:35:17 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:35:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:17 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:35:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.78934846 +0000 UTC m=+0.101628335 container create 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.751231779 +0000 UTC m=+0.063511704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:18 compute-0 systemd[1]: Started libpod-conmon-6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2.scope.
Nov 26 02:35:18 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.947222552 +0000 UTC m=+0.259502467 container init 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.964002604 +0000 UTC m=+0.276282479 container start 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.970545037 +0000 UTC m=+0.282824962 container attach 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 26 02:35:18 compute-0 trusting_banzai[485500]: 167 167
Nov 26 02:35:18 compute-0 systemd[1]: libpod-6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2.scope: Deactivated successfully.
Nov 26 02:35:18 compute-0 podman[485484]: 2025-11-26 02:35:18.977698468 +0000 UTC m=+0.289978333 container died 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eee7a6543b64f45a4593b5a7d5b5ec04fd2dbb18019d2423f53454969460487-merged.mount: Deactivated successfully.
Nov 26 02:35:19 compute-0 podman[485484]: 2025-11-26 02:35:19.054086153 +0000 UTC m=+0.366366018 container remove 6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:35:19 compute-0 systemd[1]: libpod-conmon-6248e675f353485e7f299846438eec66d1e9f010a3c0480a498b221c0eda38e2.scope: Deactivated successfully.
Nov 26 02:35:19 compute-0 podman[485522]: 2025-11-26 02:35:19.374702835 +0000 UTC m=+0.104604378 container create f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 02:35:19 compute-0 nova_compute[350387]: 2025-11-26 02:35:19.400 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:35:19 compute-0 podman[485522]: 2025-11-26 02:35:19.335741271 +0000 UTC m=+0.065642874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:19 compute-0 systemd[1]: Started libpod-conmon-f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a.scope.
Nov 26 02:35:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:19 compute-0 podman[485522]: 2025-11-26 02:35:19.51375364 +0000 UTC m=+0.243655183 container init f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 02:35:19 compute-0 podman[485522]: 2025-11-26 02:35:19.540693616 +0000 UTC m=+0.270595139 container start f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:35:19 compute-0 podman[485522]: 2025-11-26 02:35:19.546129059 +0000 UTC m=+0.276030592 container attach f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 02:35:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
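The mgr's pgmap digest shows all 321 placement groups active+clean with 60 GiB of raw capacity. That figure lines up with the three roughly 20 GiB OSD logical volumes enumerated later in this section (lv_size 21470642176 bytes each); a quick sanity check:

    # Raw capacity implied by the three OSD LVs listed further below
    # (ceph-volume reports lv_size in bytes).
    lv_size = 21_470_642_176
    print(round(3 * lv_size / 2**30, 1))  # 60.0, matching "60 GiB / 60 GiB avail"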
Nov 26 02:35:20 compute-0 nostalgic_kirch[485537]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:35:20 compute-0 nostalgic_kirch[485537]: --> relative data size: 1.0
Nov 26 02:35:20 compute-0 nostalgic_kirch[485537]: --> All data devices are unavailable
Nov 26 02:35:20 compute-0 podman[485522]: 2025-11-26 02:35:20.855575236 +0000 UTC m=+1.585476789 container died f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:35:20 compute-0 systemd[1]: libpod-f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a.scope: Deactivated successfully.
Nov 26 02:35:20 compute-0 systemd[1]: libpod-f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a.scope: Consumed 1.266s CPU time.
Nov 26 02:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c4c1cb70f6ba1071d3edf6f489e520574264183b0ebd5519a4660aab9409a38-merged.mount: Deactivated successfully.
Nov 26 02:35:20 compute-0 podman[485522]: 2025-11-26 02:35:20.958032343 +0000 UTC m=+1.687933856 container remove f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:35:20 compute-0 systemd[1]: libpod-conmon-f4ab014a6c90788f9a5b0251172c9ec2cb62cf7b3f17faed4b1ec10a7430cd2a.scope: Deactivated successfully.
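nostalgic_kirch is one of cephadm's short-lived ceph-volume helper containers: it was handed 3 LVM data devices, reported all of them unavailable (the expected answer once the LVs already carry OSDs), then exited and was removed about a second after it started. When the rejection reason is not obvious, ceph-volume's inventory output spells it out per device; a minimal sketch, assuming ceph-volume is on the PATH (inside a cephadm shell it would be) and that the JSON carries the usual available/rejected_reasons fields, which this log does not itself show:

    # Ask ceph-volume why each device is (un)available.
    # Field names beyond "path" are the customary ones, assumed here.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "rejected:", dev.get("rejected_reasons"))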
Nov 26 02:35:21 compute-0 podman[485568]: 2025-11-26 02:35:21.029667545 +0000 UTC m=+0.130455294 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 02:35:21 compute-0 podman[485571]: 2025-11-26 02:35:21.052710402 +0000 UTC m=+0.137127152 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 02:35:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
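The monitor's recurring _set_new_cache_sizes line appears to be its memory autotuner repartitioning the cache budget: cache_size is the overall target and inc/full/kv are the buckets carved out of it. The figures in this log nearly sum to the target, as a quick check shows:

    # Buckets copied from the log line above; together they account
    # for ~99.5% of the reported cache_size target.
    inc_alloc, full_alloc, kv_alloc = 348_127_232, 348_127_232, 318_767_104
    cache_size = 1_020_054_731
    total = inc_alloc + full_alloc + kv_alloc
    print(total, f"{total / cache_size:.1%}")  # 1015021568 99.5%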
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.044671773 +0000 UTC m=+0.076441806 container create 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.01750059 +0000 UTC m=+0.049270653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:22 compute-0 systemd[1]: Started libpod-conmon-01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179.scope.
Nov 26 02:35:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.168086719 +0000 UTC m=+0.199856762 container init 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.187938686 +0000 UTC m=+0.219708749 container start 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.195076947 +0000 UTC m=+0.226846990 container attach 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:35:22 compute-0 wizardly_neumann[485775]: 167 167
Nov 26 02:35:22 compute-0 systemd[1]: libpod-01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179.scope: Deactivated successfully.
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.202172806 +0000 UTC m=+0.233942859 container died 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-454ab63562b795bdbd534b4f008da2ce0b8673bdb09feaa47a8339caca147fa4-merged.mount: Deactivated successfully.
Nov 26 02:35:22 compute-0 podman[485758]: 2025-11-26 02:35:22.277586423 +0000 UTC m=+0.309356486 container remove 01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:35:22 compute-0 systemd[1]: libpod-conmon-01d97a827a38f16a13fca55cc916ade220922121362b31fdbcd1b6b8a1d26179.scope: Deactivated successfully.
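wizardly_neumann prints only "167 167" before exiting; 167 is the uid and gid of the ceph user in the official images, so this looks like cephadm's ownership probe inside the container. That reading is an inference, the log shows only the output. A hypothetical reproduction, with the image digest taken from the pull lines above:

    # Hypothetical uid/gid probe: the stat invocation and the probed
    # path are assumptions; the log only shows "167 167" as output.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    print(subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())  # expected: "167 167"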
Nov 26 02:35:22 compute-0 podman[485797]: 2025-11-26 02:35:22.583141573 +0000 UTC m=+0.080449320 container create ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:35:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:22 compute-0 podman[485797]: 2025-11-26 02:35:22.550229789 +0000 UTC m=+0.047537586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:22 compute-0 systemd[1]: Started libpod-conmon-ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61.scope.
Nov 26 02:35:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ebd3041585fae535c88609d4443560b5b017f78550641e1386c65f1572f69e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ebd3041585fae535c88609d4443560b5b017f78550641e1386c65f1572f69e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ebd3041585fae535c88609d4443560b5b017f78550641e1386c65f1572f69e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ebd3041585fae535c88609d4443560b5b017f78550641e1386c65f1572f69e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:22 compute-0 podman[485797]: 2025-11-26 02:35:22.760606726 +0000 UTC m=+0.257914533 container init ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 02:35:22 compute-0 podman[485797]: 2025-11-26 02:35:22.7913689 +0000 UTC m=+0.288676647 container start ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:35:22 compute-0 podman[485797]: 2025-11-26 02:35:22.797816671 +0000 UTC m=+0.295124418 container attach ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]: {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    "0": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "devices": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "/dev/loop3"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            ],
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_name": "ceph_lv0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_size": "21470642176",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "name": "ceph_lv0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "tags": {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_name": "ceph",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.crush_device_class": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.encrypted": "0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_id": "0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.vdo": "0"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            },
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "vg_name": "ceph_vg0"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        }
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    ],
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    "1": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "devices": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "/dev/loop4"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            ],
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_name": "ceph_lv1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_size": "21470642176",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "name": "ceph_lv1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "tags": {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_name": "ceph",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.crush_device_class": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.encrypted": "0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_id": "1",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.vdo": "0"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            },
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "vg_name": "ceph_vg1"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        }
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    ],
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    "2": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "devices": [
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "/dev/loop5"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            ],
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_name": "ceph_lv2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_size": "21470642176",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "name": "ceph_lv2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "tags": {
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.cluster_name": "ceph",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.crush_device_class": "",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.encrypted": "0",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osd_id": "2",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:                "ceph.vdo": "0"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            },
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "type": "block",
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:            "vg_name": "ceph_vg2"
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:        }
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]:    ]
Nov 26 02:35:23 compute-0 heuristic_lamport[485813]: }
Nov 26 02:35:23 compute-0 systemd[1]: libpod-ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61.scope: Deactivated successfully.
Nov 26 02:35:23 compute-0 podman[485797]: 2025-11-26 02:35:23.656068459 +0000 UTC m=+1.153376206 container died ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ebd3041585fae535c88609d4443560b5b017f78550641e1386c65f1572f69e-merged.mount: Deactivated successfully.
Nov 26 02:35:23 compute-0 podman[485797]: 2025-11-26 02:35:23.774035941 +0000 UTC m=+1.271343678 container remove ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 26 02:35:23 compute-0 systemd[1]: libpod-conmon-ee016b3f4c95a0ed05fc6941311dc86107c7af9918239507888e2bb387d7ac61.scope: Deactivated successfully.
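heuristic_lamport dumps ceph-volume's LVM listing as JSON keyed by OSD id ("0", "1", "2"), one entry per logical volume with its backing loop device, LV path, and the ceph.* LV tags. Pulling the interesting fields out of that shape is straightforward; the sample below is trimmed from the block above so the snippet runs on its own:

    # Parse the osd-id-keyed listing shown above (sample trimmed to one OSD).
    import json

    raw_json = """
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
                     "ceph.encrypted": "0"}}]}
    """
    for osd_id, entries in sorted(json.loads(raw_json).items(),
                                  key=lambda kv: int(kv[0])):
        for e in entries:
            print(osd_id, e["lv_path"], e["devices"],
                  "fsid:", e["tags"]["ceph.osd_fsid"],
                  "encrypted:", e["tags"]["ceph.encrypted"] == "1")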
Nov 26 02:35:24 compute-0 nova_compute[350387]: 2025-11-26 02:35:24.403 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:24 compute-0 nova_compute[350387]: 2025-11-26 02:35:24.407 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:35:25.025 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:35:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:35:25.026 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:35:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:35:25.026 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.037623141 +0000 UTC m=+0.105496053 container create 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:24.989256333 +0000 UTC m=+0.057129275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:25 compute-0 systemd[1]: Started libpod-conmon-199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b.scope.
Nov 26 02:35:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.184204607 +0000 UTC m=+0.252077569 container init 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.204004963 +0000 UTC m=+0.271877845 container start 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.209210609 +0000 UTC m=+0.277083521 container attach 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 02:35:25 compute-0 lucid_wiles[485990]: 167 167
Nov 26 02:35:25 compute-0 systemd[1]: libpod-199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b.scope: Deactivated successfully.
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.214951 +0000 UTC m=+0.282823932 container died 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-42553b392434da27a37d8eb677d22b5e61a4e97a24d1e398ba0c926ee8225694-merged.mount: Deactivated successfully.
Nov 26 02:35:25 compute-0 podman[485975]: 2025-11-26 02:35:25.296611773 +0000 UTC m=+0.364484685 container remove 199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:35:25 compute-0 systemd[1]: libpod-conmon-199eade027748d17d2e325f0ce50c05defbe7e7dfa659b87ac07ed2b8fef784b.scope: Deactivated successfully.
Nov 26 02:35:25 compute-0 podman[486013]: 2025-11-26 02:35:25.613015056 +0000 UTC m=+0.086220881 container create dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 26 02:35:25 compute-0 podman[486013]: 2025-11-26 02:35:25.579672661 +0000 UTC m=+0.052878536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:35:25 compute-0 systemd[1]: Started libpod-conmon-dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33.scope.
Nov 26 02:35:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c740ec865fee33d37b61f5caecc8354c338aed3a70bcf1d3abae1ac13a5ccc94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c740ec865fee33d37b61f5caecc8354c338aed3a70bcf1d3abae1ac13a5ccc94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c740ec865fee33d37b61f5caecc8354c338aed3a70bcf1d3abae1ac13a5ccc94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c740ec865fee33d37b61f5caecc8354c338aed3a70bcf1d3abae1ac13a5ccc94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:35:25 compute-0 podman[486013]: 2025-11-26 02:35:25.745231299 +0000 UTC m=+0.218437104 container init dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 02:35:25 compute-0 podman[486013]: 2025-11-26 02:35:25.763281186 +0000 UTC m=+0.236487021 container start dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:35:25 compute-0 podman[486013]: 2025-11-26 02:35:25.771136476 +0000 UTC m=+0.244342291 container attach dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:35:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:26 compute-0 boring_jackson[486029]: {
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_id": 0,
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "type": "bluestore"
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    },
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_id": 2,
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "type": "bluestore"
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    },
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_id": 1,
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:35:26 compute-0 boring_jackson[486029]:        "type": "bluestore"
Nov 26 02:35:26 compute-0 boring_jackson[486029]:    }
Nov 26 02:35:26 compute-0 boring_jackson[486029]: }
Nov 26 02:35:27 compute-0 systemd[1]: libpod-dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33.scope: Deactivated successfully.
Nov 26 02:35:27 compute-0 podman[486013]: 2025-11-26 02:35:27.001507923 +0000 UTC m=+1.474713758 container died dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:35:27 compute-0 systemd[1]: libpod-dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33.scope: Consumed 1.239s CPU time.
Nov 26 02:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c740ec865fee33d37b61f5caecc8354c338aed3a70bcf1d3abae1ac13a5ccc94-merged.mount: Deactivated successfully.
Nov 26 02:35:27 compute-0 podman[486013]: 2025-11-26 02:35:27.105313378 +0000 UTC m=+1.578519213 container remove dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:35:27 compute-0 systemd[1]: libpod-conmon-dd47eb28eaaa62cee3becb6683c03139a712d8675e9d52af9d8460db2c360f33.scope: Deactivated successfully.
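boring_jackson reports the same three OSDs from the other direction: a map keyed by osd_uuid with the dm device, osd_id, and bluestore type for each, in the shape of ceph-volume raw list output. The two listings can be cross-checked per OSD; a sketch under the assumption that both JSON blobs have already been parsed into the dicts named below:

    # Cross-check the uuid-keyed map (this block) against the id-keyed
    # LVM listing (earlier block). Inputs assumed parsed from the log.
    def cross_check(lvm_by_id, raw_by_uuid):
        for uuid, info in raw_by_uuid.items():
            tags = lvm_by_id[str(info["osd_id"])][0]["tags"]
            assert tags["ceph.osd_fsid"] == uuid
            print("osd", info["osd_id"], "ok:", info["device"], info["type"])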
Nov 26 02:35:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:35:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:35:27 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:27 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 35a53a78-d60c-4b66-b8b9-b4271b0c79c7 does not exist
Nov 26 02:35:27 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 546fd3d1-868d-4fbc-ac52-c6438ab12448 does not exist
Nov 26 02:35:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:27 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:35:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:29 compute-0 nova_compute[350387]: 2025-11-26 02:35:29.406 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:29 compute-0 nova_compute[350387]: 2025-11-26 02:35:29.409 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:29 compute-0 podman[158021]: time="2025-11-26T02:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:35:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:35:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8205 "" "Go-http-client/1.1"
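podman[158021] here is the API service answering libpod REST calls over its Unix socket. The same containers/json query can be reproduced from stdlib Python; the URL comes straight from the access-log line above, and the socket path from the podman_exporter configuration a few lines below (CONTAINER_HOST=unix:///run/podman/podman.sock):

    # Issue the GET /v4.9.3/libpod/containers/json call from the
    # access log over podman's Unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))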
Nov 26 02:35:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:31 compute-0 openstack_network_exporter[367323]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:35:31 compute-0 openstack_network_exporter[367323]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:35:31 compute-0 openstack_network_exporter[367323]: ERROR   02:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:35:31 compute-0 openstack_network_exporter[367323]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:35:31 compute-0 openstack_network_exporter[367323]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
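openstack_network_exporter cannot find appctl control sockets for ovn-northd or the OVS DB server; on a compute node that is plausibly benign, since northd normally runs on the controllers rather than here. The exporter looks for socket files under the run directories mounted into it; a quick check on the host, with the paths taken from the exporter's volume list above and the *.ctl glob an assumption about how appctl sockets are named:

    # Look for OVS/OVN control sockets the exporter expects.
    import glob

    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        hits = glob.glob(f"{d}/*.ctl")
        print(d, "->", hits or "no control sockets")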
Nov 26 02:35:31 compute-0 podman[486126]: 2025-11-26 02:35:31.597585165 +0000 UTC m=+0.125028282 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 02:35:31 compute-0 podman[486124]: 2025-11-26 02:35:31.612648717 +0000 UTC m=+0.136022450 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 02:35:31 compute-0 podman[486125]: 2025-11-26 02:35:31.612765821 +0000 UTC m=+0.141234377 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:35:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:34 compute-0 nova_compute[350387]: 2025-11-26 02:35:34.410 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.413 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.414 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.414 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.415 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.415 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:35:39 compute-0 nova_compute[350387]: 2025-11-26 02:35:39.417 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:35:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:35:41
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.log', '.mgr', 'vms', 'images', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 26 02:35:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:35:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:35:42 compute-0 podman[486180]: 2025-11-26 02:35:42.578955844 +0000 UTC m=+0.125415923 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 26 02:35:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:42 compute-0 podman[486181]: 2025-11-26 02:35:42.622761914 +0000 UTC m=+0.165354584 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:35:44 compute-0 nova_compute[350387]: 2025-11-26 02:35:44.417 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:45 compute-0 podman[486224]: 2025-11-26 02:35:45.58475809 +0000 UTC m=+0.127549662 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 02:35:45 compute-0 podman[486223]: 2025-11-26 02:35:45.590726738 +0000 UTC m=+0.141253877 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 02:35:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:49 compute-0 nova_compute[350387]: 2025-11-26 02:35:49.420 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:35:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:51 compute-0 podman[486263]: 2025-11-26 02:35:51.573243087 +0000 UTC m=+0.116103851 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:35:51 compute-0 podman[486262]: 2025-11-26 02:35:51.60434801 +0000 UTC m=+0.156943198 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:35:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 26 02:35:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:35:52 compute-0 ceph-osd[206645]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.1 total, 600.0 interval
    Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 3084 syncs, 3.48 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 331 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:35:54 compute-0 nova_compute[350387]: 2025-11-26 02:35:54.423 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:35:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.342 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.343 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.344 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.345 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.346 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:35:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:35:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652915633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:35:55 compute-0 nova_compute[350387]: 2025-11-26 02:35:55.893 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:35:56 compute-0 nova_compute[350387]: 2025-11-26 02:35:56.325 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:35:56 compute-0 nova_compute[350387]: 2025-11-26 02:35:56.326 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3931MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:35:56 compute-0 nova_compute[350387]: 2025-11-26 02:35:56.327 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:35:56 compute-0 nova_compute[350387]: 2025-11-26 02:35:56.327 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:35:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:35:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:57 compute-0 nova_compute[350387]: 2025-11-26 02:35:57.131 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:35:57 compute-0 nova_compute[350387]: 2025-11-26 02:35:57.132 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:35:57 compute-0 nova_compute[350387]: 2025-11-26 02:35:57.636 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:35:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:35:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189700640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:35:58 compute-0 nova_compute[350387]: 2025-11-26 02:35:58.133 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:35:58 compute-0 nova_compute[350387]: 2025-11-26 02:35:58.145 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:35:58 compute-0 nova_compute[350387]: 2025-11-26 02:35:58.170 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:35:58 compute-0 nova_compute[350387]: 2025-11-26 02:35:58.172 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:35:58 compute-0 nova_compute[350387]: 2025-11-26 02:35:58.172 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:35:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:35:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:35:59 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.1 total, 600.0 interval
    Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
    Cumulative WAL: 11K writes, 3375 syncs, 3.56 writes per sync, written: 0.04 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:35:59 compute-0 nova_compute[350387]: 2025-11-26 02:35:59.426 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:35:59 compute-0 podman[158021]: time="2025-11-26T02:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:35:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:35:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8206 "" "Go-http-client/1.1"
Nov 26 02:36:00 compute-0 nova_compute[350387]: 2025-11-26 02:36:00.172 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:00 compute-0 nova_compute[350387]: 2025-11-26 02:36:00.172 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:00 compute-0 nova_compute[350387]: 2025-11-26 02:36:00.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:00 compute-0 nova_compute[350387]: 2025-11-26 02:36:00.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:01 compute-0 openstack_network_exporter[367323]: ERROR   02:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:36:01 compute-0 openstack_network_exporter[367323]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:36:01 compute-0 openstack_network_exporter[367323]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:36:01 compute-0 openstack_network_exporter[367323]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:36:01 compute-0 openstack_network_exporter[367323]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:36:02 compute-0 nova_compute[350387]: 2025-11-26 02:36:02.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:02 compute-0 nova_compute[350387]: 2025-11-26 02:36:02.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 02:36:02 compute-0 nova_compute[350387]: 2025-11-26 02:36:02.300 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 02:36:02 compute-0 nova_compute[350387]: 2025-11-26 02:36:02.324 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 02:36:02 compute-0 podman[486348]: 2025-11-26 02:36:02.586766238 +0000 UTC m=+0.132322766 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 26 02:36:02 compute-0 podman[486349]: 2025-11-26 02:36:02.594631999 +0000 UTC m=+0.132021698 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 02:36:02 compute-0 podman[486350]: 2025-11-26 02:36:02.614062295 +0000 UTC m=+0.147070761 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:36:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.430 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.431 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.432 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.432 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.433 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:04 compute-0 nova_compute[350387]: 2025-11-26 02:36:04.435 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:36:05 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.2 total, 600.0 interval
    Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2792 syncs, 3.63 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:36:06 compute-0 ceph-mgr[193049]: [devicehealth INFO root] Check health
Nov 26 02:36:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:07 compute-0 nova_compute[350387]: 2025-11-26 02:36:07.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:07 compute-0 nova_compute[350387]: 2025-11-26 02:36:07.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 02:36:08 compute-0 nova_compute[350387]: 2025-11-26 02:36:08.300 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:09 compute-0 nova_compute[350387]: 2025-11-26 02:36:09.436 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:12 compute-0 nova_compute[350387]: 2025-11-26 02:36:12.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:13 compute-0 podman[486406]: 2025-11-26 02:36:13.601611005 +0000 UTC m=+0.146596277 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:36:13 compute-0 podman[486407]: 2025-11-26 02:36:13.633531762 +0000 UTC m=+0.173843973 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.438 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.440 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.441 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.441 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.441 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:14 compute-0 nova_compute[350387]: 2025-11-26 02:36:14.443 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
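The six DEBUG lines above trace one inactivity-probe cycle of the Python ovs reconnect state machine (ovs/reconnect.py in the logged paths): after roughly 5 s with no traffic on tcp:127.0.0.1:6640 a probe is sent and the connection drops to IDLE, then the reply promotes it back to ACTIVE. A simplified, illustrative model of that cycle follows; it is not the real ovs.reconnect code, and the 5000 ms interval is an assumption matching the "idle 5004 ms" seen here:

    PROBE_INTERVAL_MS = 5000  # assumed; the log shows ~5000 ms idle timeouts

    class ReconnectModel:
        """Toy model of the probe cycle; NOT the real ovs.reconnect."""
        def __init__(self, now_ms):
            self.state = "ACTIVE"
            self.last_activity = now_ms

        def activity(self, now_ms):
            # Any inbound data (e.g. the probe reply seen as [POLLIN])
            # makes the connection ACTIVE again ("entering ACTIVE").
            self.last_activity = now_ms
            self.state = "ACTIVE"

        def run(self, now_ms):
            # When the idle timer expires, send a probe and drop to IDLE,
            # matching "idle ... ms, sending inactivity probe" + "entering IDLE".
            if self.state == "ACTIVE" and now_ms - self.last_activity >= PROBE_INTERVAL_MS:
                self.state = "IDLE"
                return "send-probe"
            return None

    m = ReconnectModel(0)
    assert m.run(5004) == "send-probe" and m.state == "IDLE"
    m.activity(5006)
    assert m.state == "ACTIVE"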
Nov 26 02:36:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:16 compute-0 podman[486449]: 2025-11-26 02:36:16.607999379 +0000 UTC m=+0.135672170 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:36:16 compute-0 podman[486448]: 2025-11-26 02:36:16.621264841 +0000 UTC m=+0.155827556 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 02:36:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:19 compute-0 nova_compute[350387]: 2025-11-26 02:36:19.444 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.832241) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580832318, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1188, "num_deletes": 251, "total_data_size": 1823612, "memory_usage": 1850784, "flush_reason": "Manual Compaction"}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580848219, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1806582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52097, "largest_seqno": 53284, "table_properties": {"data_size": 1800802, "index_size": 3175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11944, "raw_average_key_size": 19, "raw_value_size": 1789333, "raw_average_value_size": 2957, "num_data_blocks": 142, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764124456, "oldest_key_time": 1764124456, "file_creation_time": 1764124580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 16061 microseconds, and 10112 cpu microseconds.
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.848310) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1806582 bytes OK
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.848338) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.851276) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.851297) EVENT_LOG_v1 {"time_micros": 1764124580851290, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.851321) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1818213, prev total WAL file size 1818213, number of live WAL files 2.
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.852799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1764KB)], [125(8806KB)]
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580852937, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10824604, "oldest_snapshot_seqno": -1}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6654 keys, 9133627 bytes, temperature: kUnknown
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580903272, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9133627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9091268, "index_size": 24646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 174612, "raw_average_key_size": 26, "raw_value_size": 8972991, "raw_average_value_size": 1348, "num_data_blocks": 975, "num_entries": 6654, "num_filter_entries": 6654, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764119629, "oldest_key_time": 0, "file_creation_time": 1764124580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1b66c307-a42f-4c02-bd88-eabf0b9b04cc", "db_session_id": "U5291X29YJY3W7NSASL8", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.903552) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9133627 bytes
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.906213) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.7 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.6 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 7168, records dropped: 514 output_compression: NoCompression
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.906244) EVENT_LOG_v1 {"time_micros": 1764124580906229, "job": 76, "event": "compaction_finished", "compaction_time_micros": 50413, "compaction_time_cpu_micros": 30288, "output_level": 6, "num_output_files": 1, "total_output_size": 9133627, "num_input_records": 7168, "num_output_records": 6654, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580906986, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580910425, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.852555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.910730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.910734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.910749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.910752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 02:36:20 compute-0 ceph-mon[192746]: rocksdb: (Original Log Time 2025/11/26-02:36:20.910755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
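The rocksdb EVENT_LOG_v1 records above are single-line JSON after a fixed marker, so the flush/compaction lifecycle (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion) can be mined straight from the journal. A small parser sketch, using only fields present in the lines above:

    import json

    def parse_event_log(line):
        """Extract the JSON payload from a RocksDB EVENT_LOG_v1 journal line.

        Returns None for non-event lines. Assumes the payload is valid JSON
        running to the end of the line, as in the entries above.
        """
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764124580906229, "job": 76, '
              '"event": "compaction_finished", "compaction_time_micros": 50413, '
              '"total_output_size": 9133627, "num_input_records": 7168, '
              '"num_output_records": 6654}')
    ev = parse_event_log(sample)
    if ev and ev["event"] == "compaction_finished":
        dropped = ev["num_input_records"] - ev["num_output_records"]
        print(f"job {ev['job']}: {dropped} records dropped, "
              f"{ev['total_output_size']} bytes in {ev['compaction_time_micros']} us")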
Nov 26 02:36:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:22 compute-0 nova_compute[350387]: 2025-11-26 02:36:22.293 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:22 compute-0 podman[486487]: 2025-11-26 02:36:22.568489919 +0000 UTC m=+0.113496498 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 02:36:22 compute-0 podman[486486]: 2025-11-26 02:36:22.568478129 +0000 UTC m=+0.117843870 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Nov 26 02:36:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.447 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.448 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.448 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.449 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.449 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 02:36:24 compute-0 nova_compute[350387]: 2025-11-26 02:36:24.451 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:36:25.027 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:36:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:36:25.027 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:36:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:36:25.027 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
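The Acquiring/acquired/released triplet above is what oslo_concurrency.lockutils logs around a critical section guarded by its synchronized decorator. A minimal usage sketch, assuming the standard oslo.concurrency API and reusing the lock name from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Body runs with the named in-process lock held; lockutils emits
        # the Acquiring/acquired/released DEBUG lines seen above.
        pass

    check_child_processes()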
Nov 26 02:36:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:36:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356394333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:36:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:36:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356394333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
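The audited commands are plain JSON mon commands ({"prefix":"df","format":"json"} and the per-pool quota query), which is also how a librados client like the one at 192.168.122.10 dispatches them. A sketch of issuing the same "df" command, assuming the python "rados" binding is installed and /etc/ceph/ceph.conf grants access:

    import json
    import rados  # python3-rados; assumes cluster credentials are available

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON command the audit log shows being dispatched.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        if ret == 0:
            df = json.loads(outbuf)
            # Field names as emitted by "ceph df -f json" (assumed stable).
            print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    finally:
        cluster.shutdown()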
Nov 26 02:36:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:36:28 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev ed1e3263-faf5-406b-af95-3f8982b490ce does not exist
Nov 26 02:36:28 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 55bf079f-4771-43c7-81de-fecf551b0ae4 does not exist
Nov 26 02:36:28 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 03715951-d761-40f7-865a-b82f425ef26d does not exist
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:36:28 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:36:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:36:28 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:36:29 compute-0 nova_compute[350387]: 2025-11-26 02:36:29.452 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:29 compute-0 podman[158021]: time="2025-11-26T02:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:36:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:36:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8202 "" "Go-http-client/1.1"
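The two GET lines above are libpod REST calls arriving over the podman API socket (the podman_exporter container mounts /run/podman/podman.sock). A stdlib-only sketch of the same containers/json query; the socket path and /v4.9.3 API prefix are taken from this log and may differ on other hosts:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix-domain socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")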
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.001574907 +0000 UTC m=+0.098943039 container create 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:29.962162741 +0000 UTC m=+0.059530923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:30 compute-0 systemd[1]: Started libpod-conmon-72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d.scope.
Nov 26 02:36:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.156544318 +0000 UTC m=+0.253912470 container init 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.172155546 +0000 UTC m=+0.269523668 container start 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.179104121 +0000 UTC m=+0.276472313 container attach 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 02:36:30 compute-0 happy_varahamihira[486814]: 167 167
Nov 26 02:36:30 compute-0 systemd[1]: libpod-72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d.scope: Deactivated successfully.
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.185275985 +0000 UTC m=+0.282644117 container died 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b59cf1e212495d702ac29e13e32d88c8830d03d4393c8beafae4aaab4556a3aa-merged.mount: Deactivated successfully.
Nov 26 02:36:30 compute-0 podman[486799]: 2025-11-26 02:36:30.268370998 +0000 UTC m=+0.365739130 container remove 72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 02:36:30 compute-0 systemd[1]: libpod-conmon-72d418b3fd666c0512b8a16454e188ac1074c88759428805f450b2af5893a77d.scope: Deactivated successfully.
Nov 26 02:36:30 compute-0 podman[486837]: 2025-11-26 02:36:30.567751344 +0000 UTC m=+0.102664254 container create e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 26 02:36:30 compute-0 podman[486837]: 2025-11-26 02:36:30.523113221 +0000 UTC m=+0.058026181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:30 compute-0 systemd[1]: Started libpod-conmon-e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a.scope.
Nov 26 02:36:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:30 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:30 compute-0 podman[486837]: 2025-11-26 02:36:30.731464561 +0000 UTC m=+0.266377521 container init e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:36:30 compute-0 podman[486837]: 2025-11-26 02:36:30.763080048 +0000 UTC m=+0.297992968 container start e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 02:36:30 compute-0 podman[486837]: 2025-11-26 02:36:30.769481358 +0000 UTC m=+0.304394278 container attach e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 02:36:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:31 compute-0 openstack_network_exporter[367323]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:36:31 compute-0 openstack_network_exporter[367323]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:36:31 compute-0 openstack_network_exporter[367323]: ERROR   02:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:36:31 compute-0 openstack_network_exporter[367323]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:36:31 compute-0 openstack_network_exporter[367323]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:36:32 compute-0 happy_ellis[486854]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:36:32 compute-0 happy_ellis[486854]: --> relative data size: 1.0
Nov 26 02:36:32 compute-0 happy_ellis[486854]: --> All data devices are unavailable
Nov 26 02:36:32 compute-0 systemd[1]: libpod-e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a.scope: Deactivated successfully.
Nov 26 02:36:32 compute-0 systemd[1]: libpod-e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a.scope: Consumed 1.294s CPU time.
Nov 26 02:36:32 compute-0 podman[486837]: 2025-11-26 02:36:32.117626462 +0000 UTC m=+1.652539352 container died e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fea6df2a42ec99f9f16d97999f68439c37e4aa4feeb9480bcfa4ef626011ba6f-merged.mount: Deactivated successfully.
Nov 26 02:36:32 compute-0 podman[486837]: 2025-11-26 02:36:32.223898386 +0000 UTC m=+1.758811296 container remove e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ellis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:36:32 compute-0 systemd[1]: libpod-conmon-e3cfeeeec8c0922a4b45c1c38b504b814fd10728d9450c65531346cb9f7ef89a.scope: Deactivated successfully.
Nov 26 02:36:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:32 compute-0 podman[486971]: 2025-11-26 02:36:32.864231716 +0000 UTC m=+0.114746003 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:36:32 compute-0 podman[486969]: 2025-11-26 02:36:32.880153233 +0000 UTC m=+0.128717255 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 26 02:36:32 compute-0 podman[486970]: 2025-11-26 02:36:32.89429119 +0000 UTC m=+0.140490316 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.364107341 +0000 UTC m=+0.082755954 container create fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.32842715 +0000 UTC m=+0.047075813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:33 compute-0 systemd[1]: Started libpod-conmon-fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a.scope.
Nov 26 02:36:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.525775181 +0000 UTC m=+0.244423844 container init fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.545453323 +0000 UTC m=+0.264101926 container start fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.551440071 +0000 UTC m=+0.270088724 container attach fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 26 02:36:33 compute-0 pedantic_jennings[487106]: 167 167
Nov 26 02:36:33 compute-0 systemd[1]: libpod-fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a.scope: Deactivated successfully.
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.559803006 +0000 UTC m=+0.278451609 container died fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f65ef28c69b4600b962a6de1678979cf57e542a4043ed813b71371104dc2b92-merged.mount: Deactivated successfully.
Nov 26 02:36:33 compute-0 podman[487090]: 2025-11-26 02:36:33.644128763 +0000 UTC m=+0.362777346 container remove fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:36:33 compute-0 systemd[1]: libpod-conmon-fb38ec45f307b4de9992d181f04a117d079e9e5fbf781fba6f1606da6bead78a.scope: Deactivated successfully.
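The bare "167 167" printed by pedantic_jennings above (and by heuristic_mendeleev later in this log) is consistent with uid/gid 167, the ceph user and group in the Ceph container images; cephadm runs short-lived containers like these to probe path ownership. A minimal sketch of such a probe, assuming the command was a stat on a ceph-owned path (the actual command line is not shown in the log):

    # Hypothetical reconstruction of the ownership probe; the image digest is
    # copied from the log, the stat target is an assumption.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    print(out.strip())  # expected "167 167" when the path is ceph-owned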
Nov 26 02:36:33 compute-0 podman[487128]: 2025-11-26 02:36:33.956286258 +0000 UTC m=+0.085294476 container create 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:36:34 compute-0 podman[487128]: 2025-11-26 02:36:33.925145064 +0000 UTC m=+0.054153322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:34 compute-0 systemd[1]: Started libpod-conmon-14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee.scope.
Nov 26 02:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5711fbb7c999b497b2f756f4276ab8375e77d1b640baa40c1be3d6c9fb76f4a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5711fbb7c999b497b2f756f4276ab8375e77d1b640baa40c1be3d6c9fb76f4a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5711fbb7c999b497b2f756f4276ab8375e77d1b640baa40c1be3d6c9fb76f4a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5711fbb7c999b497b2f756f4276ab8375e77d1b640baa40c1be3d6c9fb76f4a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:34 compute-0 podman[487128]: 2025-11-26 02:36:34.117406962 +0000 UTC m=+0.246415220 container init 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 02:36:34 compute-0 podman[487128]: 2025-11-26 02:36:34.137221949 +0000 UTC m=+0.266230167 container start 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 26 02:36:34 compute-0 podman[487128]: 2025-11-26 02:36:34.14476856 +0000 UTC m=+0.273776828 container attach 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:36:34 compute-0 nova_compute[350387]: 2025-11-26 02:36:34.455 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]: {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    "0": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "devices": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "/dev/loop3"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            ],
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_name": "ceph_lv0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_size": "21470642176",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "name": "ceph_lv0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "tags": {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_name": "ceph",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.crush_device_class": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.encrypted": "0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_id": "0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.vdo": "0"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            },
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "vg_name": "ceph_vg0"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        }
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    ],
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    "1": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "devices": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "/dev/loop4"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            ],
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_name": "ceph_lv1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_size": "21470642176",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "name": "ceph_lv1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "tags": {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_name": "ceph",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.crush_device_class": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.encrypted": "0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_id": "1",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.vdo": "0"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            },
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "vg_name": "ceph_vg1"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        }
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    ],
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    "2": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "devices": [
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "/dev/loop5"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            ],
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_name": "ceph_lv2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_size": "21470642176",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "name": "ceph_lv2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "tags": {
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.cluster_name": "ceph",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.crush_device_class": "",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.encrypted": "0",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osd_id": "2",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:                "ceph.vdo": "0"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            },
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "type": "block",
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:            "vg_name": "ceph_vg2"
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:        }
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]:    ]
Nov 26 02:36:34 compute-0 agitated_kapitsa[487142]: }
Nov 26 02:36:35 compute-0 systemd[1]: libpod-14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee.scope: Deactivated successfully.
Nov 26 02:36:35 compute-0 podman[487128]: 2025-11-26 02:36:35.04045349 +0000 UTC m=+1.169461698 container died 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 02:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5711fbb7c999b497b2f756f4276ab8375e77d1b640baa40c1be3d6c9fb76f4a7-merged.mount: Deactivated successfully.
Nov 26 02:36:35 compute-0 podman[487128]: 2025-11-26 02:36:35.135089447 +0000 UTC m=+1.264097645 container remove 14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:36:35 compute-0 systemd[1]: libpod-conmon-14400d094ad7c85a7042ed37c4fbc66a91331d7f51329017ba26167f9c9595ee.scope: Deactivated successfully.
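The JSON emitted by agitated_kapitsa above has the shape of ceph-volume lvm list --format json output: a map from OSD id to the logical volumes backing it, with the LVM tags repeated in parsed form under "tags". A minimal sketch for turning it into an OSD-to-device map, assuming the output was captured to a file (the file name is illustrative):

    # Parse the ceph-volume style listing above into per-OSD device info.
    import json

    with open("ceph-volume-lvm-list.json") as fh:  # hypothetical capture
        inventory = json.load(fh)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={lv['tags']['ceph.osd_fsid']}")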
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.331668516 +0000 UTC m=+0.096048228 container create 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.29552031 +0000 UTC m=+0.059900072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:36 compute-0 systemd[1]: Started libpod-conmon-783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c.scope.
Nov 26 02:36:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.496553305 +0000 UTC m=+0.260933067 container init 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.515921449 +0000 UTC m=+0.280301151 container start 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.522762821 +0000 UTC m=+0.287142523 container attach 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:36:36 compute-0 heuristic_mendeleev[487315]: 167 167
Nov 26 02:36:36 compute-0 systemd[1]: libpod-783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c.scope: Deactivated successfully.
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.527537055 +0000 UTC m=+0.291916767 container died 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-25748358f040d483caf12e7c911c2219233c72fd1afb8006b6acc10d8bbdc931-merged.mount: Deactivated successfully.
Nov 26 02:36:36 compute-0 podman[487299]: 2025-11-26 02:36:36.607684716 +0000 UTC m=+0.372064428 container remove 783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:36:36 compute-0 systemd[1]: libpod-conmon-783a65092878f32c77af3632890b8ae06badff288d72062c3cf5377acf69024c.scope: Deactivated successfully.
Nov 26 02:36:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:36 compute-0 podman[487337]: 2025-11-26 02:36:36.901797094 +0000 UTC m=+0.081503380 container create 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 02:36:36 compute-0 podman[487337]: 2025-11-26 02:36:36.866787881 +0000 UTC m=+0.046494227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:36:36 compute-0 systemd[1]: Started libpod-conmon-8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e.scope.
Nov 26 02:36:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84dc93be67580c5c5b9248d65936c2bd60e2f3a81269af940d8c95360c0771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84dc93be67580c5c5b9248d65936c2bd60e2f3a81269af940d8c95360c0771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84dc93be67580c5c5b9248d65936c2bd60e2f3a81269af940d8c95360c0771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e84dc93be67580c5c5b9248d65936c2bd60e2f3a81269af940d8c95360c0771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:36:37 compute-0 podman[487337]: 2025-11-26 02:36:37.057958169 +0000 UTC m=+0.237664425 container init 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:36:37 compute-0 podman[487337]: 2025-11-26 02:36:37.073096094 +0000 UTC m=+0.252802340 container start 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:36:37 compute-0 podman[487337]: 2025-11-26 02:36:37.078277849 +0000 UTC m=+0.257984105 container attach 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]: {
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_id": 0,
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "type": "bluestore"
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    },
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_id": 2,
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "type": "bluestore"
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    },
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_id": 1,
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:        "type": "bluestore"
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]:    }
Nov 26 02:36:38 compute-0 youthful_driscoll[487353]: }
Nov 26 02:36:38 compute-0 systemd[1]: libpod-8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e.scope: Deactivated successfully.
Nov 26 02:36:38 compute-0 podman[487337]: 2025-11-26 02:36:38.228148375 +0000 UTC m=+1.407854641 container died 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 26 02:36:38 compute-0 systemd[1]: libpod-8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e.scope: Consumed 1.148s CPU time.
Nov 26 02:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e84dc93be67580c5c5b9248d65936c2bd60e2f3a81269af940d8c95360c0771-merged.mount: Deactivated successfully.
Nov 26 02:36:38 compute-0 podman[487337]: 2025-11-26 02:36:38.317517644 +0000 UTC m=+1.497223890 container remove 8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 26 02:36:38 compute-0 systemd[1]: libpod-conmon-8d963b6d5550fb38a87eab0a65788ad37b500776a009876b55ff7387f62d6b5e.scope: Deactivated successfully.
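youthful_driscoll's JSON is keyed by OSD uuid rather than id and reports each volume as a bluestore OSD; together with the LVM listing above it gives cephadm a consistent view of osd.0-2 on /dev/loop3-5. A small sanity check over that structure, again assuming a captured file (name illustrative):

    # Cross-check the bluestore listing: one cluster fsid, keys match osd_uuid.
    import json

    osds = json.load(open("ceph-volume-raw-list.json"))  # hypothetical capture
    fsids = {o["ceph_fsid"] for o in osds.values()}
    assert fsids == {"36901f64-240e-5c29-a2e2-29b56f2c329c"}, fsids
    for uuid, o in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        assert o["osd_uuid"] == uuid
        print(f"osd.{o['osd_id']} -> {o['device']} ({o['type']})")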
Nov 26 02:36:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:36:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:36:38 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:36:38 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
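The two config-key set commands show the cephadm mgr module caching this host's device inventory in the monitor's key-value store. The cached blob can be read back through the standard config-key interface; that the stored value is JSON is an assumption here, though that is how cephadm normally serializes its host cache:

    # Read back the inventory cephadm just stored (key copied from the log).
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    raw = subprocess.check_output(["ceph", "config-key", "get", key], text=True)
    print(json.dumps(json.loads(raw), indent=2)[:400])  # peek at the cache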
Nov 26 02:36:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev bdac8e83-e8b1-4373-9b44-09819a1d0373 does not exist
Nov 26 02:36:38 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev fca235d7-5728-46c2-bbf9-7ed4900cfd3e does not exist
Nov 26 02:36:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:36:39 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:36:39 compute-0 nova_compute[350387]: 2025-11-26 02:36:39.461 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:36:41
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Nov 26 02:36:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
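One balancer pass: upmap mode, a 5% ceiling on misplaced PGs, up to 10 upmap changes per plan, and zero changes actually prepared, i.e. the 321 PGs are already evenly distributed across the listed pools. Rough arithmetic on what the 0.05 ceiling means for this cluster (illustrative only; the exact throttling logic is the balancer's):

    # With max_misplaced = 0.05 the balancer backs off once roughly this many
    # of the 321 PGs would be misplaced at the same time.
    pgs, max_misplaced = 321, 0.05
    print(int(pgs * max_misplaced))  # -> 16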
Nov 26 02:36:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
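For readability, the monitor's cache numbers above converted to MiB; the values are copied from the log line, and how the monitor splits the budget between the incremental-map, full-map, and key-value caches is not asserted here:

    # Unit conversion only; semantics of the split are left to the mon.
    for name, val in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                      ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {val / 2**20:.0f} MiB")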
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:36:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.884 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so polling can take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.885 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.885 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.886 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f50ab85b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.887 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7000b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7001a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab859a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85ba70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.888 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7002c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ad98c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.889 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7003b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.890 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f50ab85bfe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.890 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f50ab85b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.890 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa700440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.891 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f50aa700080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f50aa7000e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f50aa700170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f50ab859a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.891 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f50aa700200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f50ab85ba40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.893 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.893 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f50aa700290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.893 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f50ab85baa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.894 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f50ab85bb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f50aa700320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f50ab8b1250>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.894 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85bda0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.895 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f50aa700410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f50ab859a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f50ab85b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.897 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f50ab85bfb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.897 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.896 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.897 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.898 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.898 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.898 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f50ab85b590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f50ab85b5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f50ab85b650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f50ab85b6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.900 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.899 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50aa7006e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.900 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.901 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.901 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f50ab85a7b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f50ab85aea0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'cpu': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f50aa7006b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.901 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.901 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f50ab85b710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f50ab85b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f50ab85a840>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f50ab81fd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
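The Executing discovery / Skip pollster pairs above trace ceilometer's per-cycle flow: each pollster first runs its discovery method (local_instances here), the result is memoized in the shared discovery cache, and when discovery returns nothing the sample-collection step is skipped. A minimal sketch of that control flow, with illustrative names rather than the actual ceilometer/polling/manager.py code:

    # Sketch of the discovery-then-skip flow visible above; function and
    # meter names are illustrative, not ceilometer's real API.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # No instances on this host, matching the log's cache [{'local_instances': []}].
        return []

    def collect_sample(meter, resource):
        return {"meter": meter, "resource": resource}

    def run_pollster(meter, discovery_cache):
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {meter}, no resources found this cycle")
            return []
        return [collect_sample(meter, r) for r in resources]

    discovery_cache = {}  # shared for the whole polling cycle
    with ThreadPoolExecutor() as executor:
        for meter in ("memory.usage", "network.incoming.bytes", "disk.device.usage"):
            executor.submit(run_pollster, meter, discovery_cache)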
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.902 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.903 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.904 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.905 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:42 compute-0 ceilometer_agent_compute[361163]: 2025-11-26 02:36:42.906 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 02:36:44 compute-0 nova_compute[350387]: 2025-11-26 02:36:44.465 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:44 compute-0 nova_compute[350387]: 2025-11-26 02:36:44.466 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
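The ovsdbapp [POLLIN] on fd 24 messages mean the OVSDB IDL's poll loop woke up because its connection socket became readable; the 4999-ms timeout a few entries below is the same loop expiring with nothing to read. A generic readiness loop using Python's standard selectors module illustrates the pattern (this is not the ovs.poller implementation itself):

    # Generic fd-readiness loop, mirroring what the "[POLLIN] on fd N" and
    # "4999-ms timeout" vlog lines report. Not the actual ovs.poller code.
    import selectors
    import socket

    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()          # stand-in for the OVSDB connection fd
    sel.register(a, selectors.EVENT_READ)

    b.send(b"update")                   # make the fd readable
    for key, mask in sel.select(timeout=4.999):
        if mask & selectors.EVENT_READ:
            print(f"[POLLIN] on fd {key.fd}")
            key.fileobj.recv(4096)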
Nov 26 02:36:44 compute-0 podman[487450]: 2025-11-26 02:36:44.574555621 +0000 UTC m=+0.117932153 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 02:36:44 compute-0 podman[487451]: 2025-11-26 02:36:44.609184243 +0000 UTC m=+0.149637703 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
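The podman container health_status events above (and the kepler and multipathd ones below) record periodic healthcheck runs; health_status=healthy with health_failing_streak=0 means the check command mounted at /openstack/healthcheck keeps succeeding. The same state can be read back on the host with podman inspect, whose State.Health fields follow the standard container-inspect schema; for example:

    # Read back the health state the podman health_status events report.
    import json
    import subprocess

    def container_health(name):
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)[0]["State"]["Health"]

    h = container_health("ceilometer_agent_ipmi")
    print(h["Status"], h["FailingStreak"])  # expect: healthy 0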
Nov 26 02:36:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:47 compute-0 podman[487496]: 2025-11-26 02:36:47.570469253 +0000 UTC m=+0.119707622 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, managed_by=edpm_ansible)
Nov 26 02:36:47 compute-0 podman[487497]: 2025-11-26 02:36:47.583769206 +0000 UTC m=+0.135029012 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 02:36:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:49 compute-0 nova_compute[350387]: 2025-11-26 02:36:49.467 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 02:36:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:36:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
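Each _maybe_adjust pass computes, per pool, the fraction of raw cluster capacity the pool consumes (against the 64411926528-byte total, about 60 GiB), multiplies by the pool's bias, and scales by the cluster's PG budget before quantizing to a pg_num (roughly a power of two, subject to per-pool minimums). The targets logged above are reproducible with a budget of 300 PGs, consistent with the default mon_target_pg_per_osd=100 on three OSDs, though that budget is an inference, not something the log states:

    # Reproduce the pg_autoscaler "pg target" figures above.
    # BUDGET = mon_target_pg_per_osd * num_osds is an assumption (100 * 3).
    BUDGET = 300

    pools = {
        # name: (usage_ratio_from_log, bias)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * BUDGET)
    # .mgr               ~ 0.00215572  (log: 0.0021557249951162337)
    # images             ~ 0.275742    (log: 0.2757420272514163)
    # cephfs.cephfs.meta ~ 0.00061047  (log: 0.0006104707950771635)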
Nov 26 02:36:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:53 compute-0 podman[487535]: 2025-11-26 02:36:53.520374836 +0000 UTC m=+0.082823356 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 02:36:53 compute-0 podman[487534]: 2025-11-26 02:36:53.529482082 +0000 UTC m=+0.093773524 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 26 02:36:54 compute-0 nova_compute[350387]: 2025-11-26 02:36:54.469 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:36:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.334 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.335 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.335 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
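The Acquiring / acquired / released triplet around compute_resources is emitted by oslo.concurrency's lockutils, which nova's ResourceTracker uses to serialize resource accounting; held 0.000s shows the cache-cleaning critical section was effectively a no-op here. The decorator form, as a standalone sketch rather than nova's own code (requires the oslo.concurrency package):

    # Minimal oslo.concurrency usage; entering and leaving the decorated
    # function produces Acquiring/acquired/released DEBUG lines like the above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # body runs only while the "compute_resources" lock is held

    clean_compute_node_cache()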
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.335 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.336 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:36:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:36:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859338892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:36:55 compute-0 nova_compute[350387]: 2025-11-26 02:36:55.829 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
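For RBD-backed storage, nova sizes the disk pool by shelling out to ceph df --format=json exactly as logged above (the mon's audit channel records the dispatch). Parsing the result is straightforward; the stats keys used here are the ones ceph df emits, though the exact schema should be treated as version-dependent:

    # Run the same command nova logs above and pull cluster totals from the JSON.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB")  # ~60 of 60 GiB per the pgmap lines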
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.333 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.336 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3916MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.336 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.337 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:36:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.428 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.429 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.447 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing inventories for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 02:36:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.955 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating ProviderTree inventory for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.956 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Updating inventory in ProviderTree for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
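Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, so the values being pushed here amount to 32 VCPUs, 7167 MB of RAM and 52.2 GB of disk. A quick check with the numbers from the log:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -- what the scheduler can place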
Nov 26 02:36:56 compute-0 nova_compute[350387]: 2025-11-26 02:36:56.971 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing aggregate associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.008 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Refreshing trait associations for resource provider 0e9e5c9b-dee2-4076-966b-e19b2697b966, traits: COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.022 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 02:36:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:36:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128859423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.492 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
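The two processutils lines above bracket a shelled-out "ceph df --format=json", which the RBD-backed disk inventory uses to size the pool (0.470s wall time here). A standalone sketch of the same probe; the command, client id, and conf path are taken from the log line, and the JSON keys are the stock "ceph df" ones:

    import json
    import subprocess

    # Same command the log shows oslo_concurrency.processutils executing.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])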
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.504 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.534 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.537 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 02:36:57 compute-0 nova_compute[350387]: 2025-11-26 02:36:57.538 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
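The req-db210a33 lines above trace one complete "compute_resources" critical section: acquired at 02:36:56.337 after waiting 0.001s, released at 02:36:57.538 after being held 1.201s. The Acquiring/acquired/released DEBUG messages are emitted by oslo.concurrency itself whenever a decorated method runs; a minimal sketch of the pattern (class and method names are stand-ins, not nova's):

    from oslo_concurrency import lockutils

    class ToyResourceTracker:
        # Serializes inventory updates the way the log shows
        # _update_available_resource being wrapped; lockutils logs the
        # acquire/release lines at DEBUG automatically.
        @lockutils.synchronized("compute_resources")
        def update_available_resource(self):
            pass  # runs under the in-process "compute_resources" lock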
Nov 26 02:36:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.473 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.475 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.475 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.475 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.476 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:36:59 compute-0 nova_compute[350387]: 2025-11-26 02:36:59.478 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
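The ovsdbapp lines above show one round of the OVS reconnect state machine: after roughly 5 seconds with no traffic on tcp:127.0.0.1:6640 the client sends an inactivity probe and drops to IDLE, and the POLLIN that follows is the probe reply, which promotes the session back to ACTIVE. A simplified model of that logic; the real implementation is ovs/reconnect.py, and the names here are illustrative:

    import time

    PROBE_INTERVAL = 5.0  # matches the ~5000 ms idle threshold in the log

    class ProbeState:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_activity(self):  # e.g. POLLIN on the session fd
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"

        def tick(self, send_probe):
            if time.monotonic() - self.last_activity >= PROBE_INTERVAL:
                send_probe()         # "sending inactivity probe"
                self.state = "IDLE"  # disconnect if nothing answers in time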
Nov 26 02:36:59 compute-0 podman[158021]: time="2025-11-26T02:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:36:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:36:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
Nov 26 02:37:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:01 compute-0 openstack_network_exporter[367323]: ERROR   02:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:37:01 compute-0 openstack_network_exporter[367323]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:37:01 compute-0 openstack_network_exporter[367323]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:37:01 compute-0 openstack_network_exporter[367323]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:37:01 compute-0 openstack_network_exporter[367323]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
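These exporter errors are expected on this host: ovn-northd runs on the control plane, not on a compute node, so there is no local control socket for it, and the dpif-netdev/pmd-* appctl commands only apply to a userspace (DPDK) datapath, which this kernel-datapath node does not have. Reproducing one probe by hand, roughly what the exporter's appctl.go does:

    import subprocess

    # Ask the local ovs-vswitchd for PMD stats; on a kernel-datapath node
    # it answers "please specify an existing datapath", as logged above.
    res = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
        capture_output=True, text=True,
    )
    print(res.returncode, (res.stdout + res.stderr).strip())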
Nov 26 02:37:01 compute-0 nova_compute[350387]: 2025-11-26 02:37:01.548 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:01 compute-0 nova_compute[350387]: 2025-11-26 02:37:01.550 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:01 compute-0 nova_compute[350387]: 2025-11-26 02:37:01.551 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:01 compute-0 nova_compute[350387]: 2025-11-26 02:37:01.551 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:02 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:03 compute-0 podman[487621]: 2025-11-26 02:37:03.576593307 +0000 UTC m=+0.100597005 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 02:37:03 compute-0 podman[487620]: 2025-11-26 02:37:03.604963894 +0000 UTC m=+0.132467881 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:37:03 compute-0 podman[487619]: 2025-11-26 02:37:03.611677632 +0000 UTC m=+0.154772237 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.298 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.331 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.331 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:04 compute-0 nova_compute[350387]: 2025-11-26 02:37:04.480 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:04 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:06 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:06 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:08 compute-0 nova_compute[350387]: 2025-11-26 02:37:08.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:08 compute-0 nova_compute[350387]: 2025-11-26 02:37:08.299 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:08 compute-0 nova_compute[350387]: 2025-11-26 02:37:08.299 350391 DEBUG nova.compute.manager [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
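_reclaim_queued_deletes bails out immediately because deferred delete is disabled (reclaim_instance_interval is 0 or less). The ComputeManager tasks named in these lines all hang off oslo.service's periodic-task machinery; a minimal sketch of that pattern with the same guard (ToyManager and the spacing value are stand-ins):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class ToyManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # mirrors the "skipping..." DEBUG line above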
Nov 26 02:37:08 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:09 compute-0 nova_compute[350387]: 2025-11-26 02:37:09.483 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:10 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:11 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:11 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:12 compute-0 nova_compute[350387]: 2025-11-26 02:37:12.295 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 02:37:12 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.485 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.486 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.486 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.486 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.487 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:14 compute-0 nova_compute[350387]: 2025-11-26 02:37:14.487 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:14 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:14 compute-0 podman[487677]: 2025-11-26 02:37:14.758650399 +0000 UTC m=+0.075436969 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 26 02:37:14 compute-0 podman[487678]: 2025-11-26 02:37:14.821505044 +0000 UTC m=+0.123888840 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:37:16 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:16 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:18 compute-0 podman[487722]: 2025-11-26 02:37:18.563080852 +0000 UTC m=+0.111814961 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Nov 26 02:37:18 compute-0 podman[487723]: 2025-11-26 02:37:18.585008078 +0000 UTC m=+0.137239755 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 02:37:18 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:19 compute-0 nova_compute[350387]: 2025-11-26 02:37:19.489 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:20 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:21 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:22 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.491 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.493 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.493 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.493 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.494 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:24 compute-0 nova_compute[350387]: 2025-11-26 02:37:24.496 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:24 compute-0 podman[487763]: 2025-11-26 02:37:24.585562624 +0000 UTC m=+0.125907116 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, vcs-type=git, vendor=Red Hat, Inc.)
Nov 26 02:37:24 compute-0 podman[487764]: 2025-11-26 02:37:24.60997873 +0000 UTC m=+0.145985830 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 02:37:24 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:37:25.028 286844 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 02:37:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:37:25.029 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 02:37:25 compute-0 ovn_metadata_agent[286828]: 2025-11-26 02:37:25.029 286844 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 02:37:26 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:26 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 02:37:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3405437229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 02:37:27 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 02:37:27 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3405437229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
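The audit lines show the shared client.openstack user dispatching JSON mon commands ("df", then "osd pool get-quota" for the volumes pool). librados exposes the same path programmatically; a sketch, assuming python3-rados is installed and the client.openstack keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        # Same {"prefix": ...} payloads the monitor logs as dispatched.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        ret, quota, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
    finally:
        cluster.shutdown()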
Nov 26 02:37:28 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:29 compute-0 nova_compute[350387]: 2025-11-26 02:37:29.495 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:29 compute-0 nova_compute[350387]: 2025-11-26 02:37:29.497 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:29 compute-0 podman[158021]: time="2025-11-26T02:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:37:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:37:29 compute-0 podman[158021]: @ - - [26/Nov/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8203 "" "Go-http-client/1.1"
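The @-prefixed access-log lines come from the podman system service answering libpod REST calls over its unix socket; the Go-http-client user agent is the exporter polling it every 30s. podman-py drives the same endpoint from Python; a sketch, assuming the podman package is installed:

    from podman import PodmanClient

    # Same socket and roughly the same query as the logged
    # GET /v4.9.3/libpod/containers/json?all=true call.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)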
Nov 26 02:37:30 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:31 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:31 compute-0 openstack_network_exporter[367323]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:37:31 compute-0 openstack_network_exporter[367323]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:37:31 compute-0 openstack_network_exporter[367323]: ERROR   02:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:37:31 compute-0 openstack_network_exporter[367323]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:37:31 compute-0 openstack_network_exporter[367323]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:37:32 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:34 compute-0 nova_compute[350387]: 2025-11-26 02:37:34.497 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:34 compute-0 podman[487806]: 2025-11-26 02:37:34.576701176 +0000 UTC m=+0.130978818 container health_status bf437a65d4f068e81722e77a9dc92e189f26b413c36c0bbe700f6300d652b9b0 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Nov 26 02:37:34 compute-0 podman[487808]: 2025-11-26 02:37:34.591060149 +0000 UTC m=+0.126681378 container health_status fee20866b26b7c31ec36f5c538d23b9799f12062267185d7ef9300885e28339e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 02:37:34 compute-0 podman[487807]: 2025-11-26 02:37:34.617047948 +0000 UTC m=+0.159270293 container health_status e941ed66e7939cb03bfe4d9bb83104e66e32aeefc9051a1c218e8f80b34471ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 02:37:34 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 26 02:37:36 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:36 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Nov 26 02:37:38 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.500 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.502 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.502 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.503 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.503 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 02:37:39 compute-0 nova_compute[350387]: 2025-11-26 02:37:39.505 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 02:37:39 compute-0 systemd-logind[800]: New session 66 of user zuul.
Nov 26 02:37:39 compute-0 systemd[1]: Started Session 66 of User zuul.
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 41376838-51a9-4760-81d4-0e413c36739f does not exist
Nov 26 02:37:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev b28d8f06-d256-4773-bb1a-e80b8586c906 does not exist
Nov 26 02:37:40 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev 5fcd5ccc-b591-4311-8b2b-4408f46b3587 does not exist
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:37:40 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 26 02:37:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 02:37:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:40 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Optimize plan auto_2025-11-26_02:37:41
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] do_upmap
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'images', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Nov 26 02:37:41 compute-0 ceph-mgr[193049]: [balancer INFO root] prepared 0/10 changes
Nov 26 02:37:41 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:41 compute-0 podman[488228]: 2025-11-26 02:37:41.995885553 +0000 UTC m=+0.061819796 container create 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 02:37:42 compute-0 systemd[1]: Started libpod-conmon-2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8.scope.
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:41.972303511 +0000 UTC m=+0.038237784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:42.141231995 +0000 UTC m=+0.207166288 container init 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:42.164264361 +0000 UTC m=+0.230198624 container start 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 02:37:42 compute-0 mystifying_raman[488247]: 167 167
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:42.171066172 +0000 UTC m=+0.237000445 container attach 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:37:42 compute-0 systemd[1]: libpod-2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8.scope: Deactivated successfully.
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:42.177191094 +0000 UTC m=+0.243125337 container died 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-78fec0f2bcc183c8d3b528cd24b10ba7bf3c191b0327a3d69468d6714d3596b7-merged.mount: Deactivated successfully.
Nov 26 02:37:42 compute-0 podman[488228]: 2025-11-26 02:37:42.230899832 +0000 UTC m=+0.296834065 container remove 2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 26 02:37:42 compute-0 systemd[1]: libpod-conmon-2fb36ee440167725a23b4a738d63d06c0f0b7f99196dd146bb447b6055d0bec8.scope: Deactivated successfully.
Nov 26 02:37:42 compute-0 podman[488276]: 2025-11-26 02:37:42.526491962 +0000 UTC m=+0.094701130 container create 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:37:42 compute-0 podman[488276]: 2025-11-26 02:37:42.493394523 +0000 UTC m=+0.061603741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:42 compute-0 systemd[1]: Started libpod-conmon-148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be.scope.
Nov 26 02:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:42 compute-0 podman[488276]: 2025-11-26 02:37:42.712424883 +0000 UTC m=+0.280634031 container init 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 02:37:42 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 02:37:42 compute-0 podman[488276]: 2025-11-26 02:37:42.733133834 +0000 UTC m=+0.301343002 container start 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:37:42 compute-0 podman[488276]: 2025-11-26 02:37:42.739555975 +0000 UTC m=+0.307765123 container attach 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 02:37:43 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15853 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:43 compute-0 agitated_bardeen[488294]: --> passed data devices: 0 physical, 3 LVM
Nov 26 02:37:43 compute-0 agitated_bardeen[488294]: --> relative data size: 1.0
Nov 26 02:37:43 compute-0 agitated_bardeen[488294]: --> All data devices are unavailable
Nov 26 02:37:43 compute-0 systemd[1]: libpod-148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be.scope: Deactivated successfully.
Nov 26 02:37:43 compute-0 systemd[1]: libpod-148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be.scope: Consumed 1.201s CPU time.
Nov 26 02:37:43 compute-0 podman[488276]: 2025-11-26 02:37:43.984691986 +0000 UTC m=+1.552901134 container died 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 02:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f65d1e3e6e8b306c40e872dc78c1c94f1d8bbf6682292e973002acff9b1ada-merged.mount: Deactivated successfully.
Nov 26 02:37:44 compute-0 podman[488276]: 2025-11-26 02:37:44.059146727 +0000 UTC m=+1.627355875 container remove 148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:37:44 compute-0 systemd[1]: libpod-conmon-148c9876fc510a7d7e7b5d3ec470e325c4bce90feca8ea7875260e44642cb3be.scope: Deactivated successfully.
Nov 26 02:37:44 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15855 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:44 compute-0 nova_compute[350387]: 2025-11-26 02:37:44.505 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:44 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 02:37:44 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911542765' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 02:37:44 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.199033712 +0000 UTC m=+0.092474676 container create 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.16616754 +0000 UTC m=+0.059608554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:45 compute-0 systemd[1]: Started libpod-conmon-9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26.scope.
Nov 26 02:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.371980308 +0000 UTC m=+0.265421282 container init 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.388990286 +0000 UTC m=+0.282431260 container start 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 02:37:45 compute-0 eloquent_mendeleev[488642]: 167 167
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.399007197 +0000 UTC m=+0.292448171 container attach 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 02:37:45 compute-0 systemd[1]: libpod-9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26.scope: Deactivated successfully.
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.401478266 +0000 UTC m=+0.294919200 container died 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:37:45 compute-0 podman[488624]: 2025-11-26 02:37:45.415016887 +0000 UTC m=+0.146014831 container health_status 576873a708f4d336ed7aa19b49139db6fa2977228b58364ba453c9bdf301db22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 02:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9af29dd337d4d7ba322e9336bcc245d3ad361bb7a587fdc2b5aa59e0abe9f86f-merged.mount: Deactivated successfully.
Nov 26 02:37:45 compute-0 podman[488612]: 2025-11-26 02:37:45.461761059 +0000 UTC m=+0.355201993 container remove 9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 26 02:37:45 compute-0 podman[488625]: 2025-11-26 02:37:45.472241433 +0000 UTC m=+0.202551998 container health_status e53150071afd5e7c64acb279355ad1c4d9cc3315f07b6826b479ab0624c9eb16 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 02:37:45 compute-0 systemd[1]: libpod-conmon-9df7debe9b754bf7a679037daadc26fad1bfdd2f23b291681dbec20a48130f26.scope: Deactivated successfully.
Nov 26 02:37:45 compute-0 podman[488699]: 2025-11-26 02:37:45.747020229 +0000 UTC m=+0.096040678 container create d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 26 02:37:45 compute-0 podman[488699]: 2025-11-26 02:37:45.7150067 +0000 UTC m=+0.064027239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:45 compute-0 systemd[1]: Started libpod-conmon-d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f.scope.
Nov 26 02:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214c0cf7a5c659e705e1a011984a0a37496c0697b5d690b9b04f0ff2955881c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214c0cf7a5c659e705e1a011984a0a37496c0697b5d690b9b04f0ff2955881c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214c0cf7a5c659e705e1a011984a0a37496c0697b5d690b9b04f0ff2955881c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214c0cf7a5c659e705e1a011984a0a37496c0697b5d690b9b04f0ff2955881c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:45 compute-0 podman[488699]: 2025-11-26 02:37:45.927343942 +0000 UTC m=+0.276364391 container init d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 02:37:45 compute-0 podman[488699]: 2025-11-26 02:37:45.95611743 +0000 UTC m=+0.305137909 container start d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 26 02:37:45 compute-0 podman[488699]: 2025-11-26 02:37:45.96254325 +0000 UTC m=+0.311563709 container attach d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 02:37:46 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:46 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]: {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    "0": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "devices": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "/dev/loop3"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            ],
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_name": "ceph_lv0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_size": "21470642176",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=835781ef-644a-4834-abb3-029e5bcba0ff,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "name": "ceph_lv0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "tags": {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_uuid": "MP1Wps-d3pE-soBW-1C0H-xNvU-vGH8-dLLxKu",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_name": "ceph",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.crush_device_class": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.encrypted": "0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_fsid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_id": "0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.vdo": "0"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            },
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "vg_name": "ceph_vg0"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        }
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    ],
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    "1": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "devices": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "/dev/loop4"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            ],
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_name": "ceph_lv1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_size": "21470642176",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a345f9b0-19f1-464f-95c4-9c68bb202f1e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "name": "ceph_lv1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "tags": {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_uuid": "vjOebm-9ZJr-BmeM-oLOI-EJ6b-FzPw-k2RwuU",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_name": "ceph",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.crush_device_class": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.encrypted": "0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_fsid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_id": "1",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.vdo": "0"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            },
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "vg_name": "ceph_vg1"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        }
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    ],
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    "2": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "devices": [
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "/dev/loop5"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            ],
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_name": "ceph_lv2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_size": "21470642176",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=36901f64-240e-5c29-a2e2-29b56f2c329c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8f697525-afad-4f38-820d-80587338cf3b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "lv_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "name": "ceph_lv2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "tags": {
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.block_uuid": "laSivP-Qw1W-E7iL-FypG-Sdeq-0fLk-EDtXTc",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cephx_lockbox_secret": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.cluster_name": "ceph",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.crush_device_class": "",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.encrypted": "0",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_fsid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osd_id": "2",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:                "ceph.vdo": "0"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            },
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "type": "block",
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:            "vg_name": "ceph_vg2"
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:        }
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]:    ]
Nov 26 02:37:46 compute-0 beautiful_knuth[488715]: }
Nov 26 02:37:46 compute-0 systemd[1]: libpod-d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f.scope: Deactivated successfully.
Nov 26 02:37:46 compute-0 podman[488699]: 2025-11-26 02:37:46.862076608 +0000 UTC m=+1.211097087 container died d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 02:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-214c0cf7a5c659e705e1a011984a0a37496c0697b5d690b9b04f0ff2955881c7-merged.mount: Deactivated successfully.
Nov 26 02:37:46 compute-0 podman[488699]: 2025-11-26 02:37:46.951309253 +0000 UTC m=+1.300329712 container remove d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:37:46 compute-0 systemd[1]: libpod-conmon-d9fbe11a7bc24e855a27da743ba733b009b72c0e578ece071e1109f96afd0b0f.scope: Deactivated successfully.
Nov 26 02:37:48 compute-0 ovs-vsctl[488906]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.130877804 +0000 UTC m=+0.078077263 container create f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.097945919 +0000 UTC m=+0.045145448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:48 compute-0 systemd[1]: Started libpod-conmon-f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9.scope.
Nov 26 02:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.278036466 +0000 UTC m=+0.225235945 container init f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.298652975 +0000 UTC m=+0.245852454 container start f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.304356625 +0000 UTC m=+0.251556184 container attach f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:37:48 compute-0 condescending_lewin[488932]: 167 167
Nov 26 02:37:48 compute-0 systemd[1]: libpod-f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9.scope: Deactivated successfully.
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.31094628 +0000 UTC m=+0.258145819 container died f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 02:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1a6cd0588278a64da6f1e34f3301b98e68cb4c5a805f2a5949c97d205c60b1b-merged.mount: Deactivated successfully.
Nov 26 02:37:48 compute-0 podman[488902]: 2025-11-26 02:37:48.391021969 +0000 UTC m=+0.338221448 container remove f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lewin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 02:37:48 compute-0 systemd[1]: libpod-conmon-f402022d540770d7b032b4b17845a0e9dfdf99137d5e2e71cfef1086883757c9.scope: Deactivated successfully.
Nov 26 02:37:48 compute-0 podman[488981]: 2025-11-26 02:37:48.638976601 +0000 UTC m=+0.082036685 container create dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 26 02:37:48 compute-0 podman[488981]: 2025-11-26 02:37:48.604709329 +0000 UTC m=+0.047769483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 02:37:48 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 26 02:37:48 compute-0 systemd[1]: Started libpod-conmon-dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8.scope.
Nov 26 02:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 02:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8970e9d52b40a2f3799671edcdb1e5f6e5bbcc59b9d57d0e0d3812b16c37e818/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8970e9d52b40a2f3799671edcdb1e5f6e5bbcc59b9d57d0e0d3812b16c37e818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8970e9d52b40a2f3799671edcdb1e5f6e5bbcc59b9d57d0e0d3812b16c37e818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8970e9d52b40a2f3799671edcdb1e5f6e5bbcc59b9d57d0e0d3812b16c37e818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 02:37:48 compute-0 podman[488981]: 2025-11-26 02:37:48.827256216 +0000 UTC m=+0.270316330 container init dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 02:37:48 compute-0 podman[488981]: 2025-11-26 02:37:48.847789213 +0000 UTC m=+0.290849297 container start dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 02:37:48 compute-0 podman[488981]: 2025-11-26 02:37:48.854304076 +0000 UTC m=+0.297364170 container attach dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 02:37:48 compute-0 podman[488992]: 2025-11-26 02:37:48.862509416 +0000 UTC m=+0.157045009 container health_status 1798a0c3bf8b2e2e3398f3aaca7bb87b6313dac736689186d7f0d4aa2fed44d9 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc.)
Nov 26 02:37:48 compute-0 podman[488996]: 2025-11-26 02:37:48.875415909 +0000 UTC m=+0.160538188 container health_status ff84fbbcab370802bff2f3fb33f019af22ed83c0746e51792584e59702666cd2 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
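
The two health_status records above show the edpm_ansible-managed healthcheck pattern: each container bind-mounts /var/lib/openstack/healthchecks/<name> at /openstack read-only and podman periodically runs the configured test (e.g. '/openstack/healthcheck kepler'), logging health_status=healthy with a failing streak of 0. A minimal sketch of triggering the same checks by hand with podman's standard `healthcheck run` subcommand (container names taken from the log; running this anywhere else is an assumption):

    import subprocess

    # `podman healthcheck run <name>` executes the container's configured
    # healthcheck test and exits 0 when it passes.
    for name in ["kepler", "multipathd"]:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")
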
Nov 26 02:37:49 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 02:37:49 compute-0 nova_compute[350387]: 2025-11-26 02:37:49.507 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:49 compute-0 nova_compute[350387]: 2025-11-26 02:37:49.509 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:49 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 02:37:49 compute-0 virtqemud[138515]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
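
The three virtqemud "Failed to connect socket" messages indicate that the read-only sockets of the modular libvirt daemons virtnetworkd, virtnwfilterd and virtstoraged do not exist under /var/run/libvirt; on a compute node where only virtqemud is deployed this is typically benign noise from virtqemud probing its peer drivers. A quick sketch to confirm which of those sockets are actually present (paths copied verbatim from the log):

    import os

    # Read-only sockets of the modular libvirt daemons probed above.
    for path in (
        "/var/run/libvirt/virtnetworkd-sock-ro",
        "/var/run/libvirt/virtnwfilterd-sock-ro",
        "/var/run/libvirt/virtstoraged-sock-ro",
    ):
        print(path, "present" if os.path.exists(path) else "missing")
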
Nov 26 02:37:49 compute-0 hungry_merkle[489014]: {
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    "835781ef-644a-4834-abb3-029e5bcba0ff": {
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_id": 0,
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "type": "bluestore"
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    },
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    "8f697525-afad-4f38-820d-80587338cf3b": {
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_id": 2,
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_uuid": "8f697525-afad-4f38-820d-80587338cf3b",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "type": "bluestore"
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    },
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    "a345f9b0-19f1-464f-95c4-9c68bb202f1e": {
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_id": 1,
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "osd_uuid": "a345f9b0-19f1-464f-95c4-9c68bb202f1e",
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:        "type": "bluestore"
Nov 26 02:37:49 compute-0 hungry_merkle[489014]:    }
Nov 26 02:37:49 compute-0 hungry_merkle[489014]: }
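
The JSON emitted by the short-lived hungry_merkle ceph container maps each OSD uuid to its cluster fsid, logical-volume device and id; the shape matches `ceph-volume lvm list --format json` output (an assumption, since the command line itself is not logged). A minimal sketch of consuming that structure:

    import json

    # One entry copied from the hungry_merkle output above; the parsing
    # loop is illustrative, not the deployment's actual tooling.
    raw = """{
      "835781ef-644a-4834-abb3-029e5bcba0ff": {
        "ceph_fsid": "36901f64-240e-5c29-a2e2-29b56f2c329c",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "835781ef-644a-4834-abb3-029e5bcba0ff",
        "type": "bluestore"
      }
    }"""

    for osd_uuid, meta in json.loads(raw).items():
        print(f"osd.{meta['osd_id']} ({meta['type']}) -> {meta['device']}")
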
Nov 26 02:37:49 compute-0 systemd[1]: libpod-dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8.scope: Deactivated successfully.
Nov 26 02:37:49 compute-0 systemd[1]: libpod-dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8.scope: Consumed 1.133s CPU time.
Nov 26 02:37:49 compute-0 podman[488981]: 2025-11-26 02:37:49.997811564 +0000 UTC m=+1.440871668 container died dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 02:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8970e9d52b40a2f3799671edcdb1e5f6e5bbcc59b9d57d0e0d3812b16c37e818-merged.mount: Deactivated successfully.
Nov 26 02:37:50 compute-0 podman[488981]: 2025-11-26 02:37:50.077924073 +0000 UTC m=+1.520984147 container remove dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 26 02:37:50 compute-0 systemd[1]: libpod-conmon-dbf8785754bf959320231b0542eb68a2e823270a3c442f55d91598ce008aa6c8.scope: Deactivated successfully.
Nov 26 02:37:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 02:37:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:50 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 02:37:50 compute-0 ceph-mon[192746]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e34c3da9-18f1-4122-a0ea-36525c432417 does not exist
Nov 26 02:37:50 compute-0 ceph-mgr[193049]: [progress WARNING root] complete: ev e3a13aac-09e9-422f-beea-dd762051a567 does not exist
Nov 26 02:37:50 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: cache status {prefix=cache status} (starting...)
Nov 26 02:37:50 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: client ls {prefix=client ls} (starting...)
Nov 26 02:37:50 compute-0 lvm[489414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 02:37:50 compute-0 lvm[489414]: VG ceph_vg2 finished
Nov 26 02:37:50 compute-0 lvm[489415]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 02:37:50 compute-0 lvm[489415]: VG ceph_vg1 finished
Nov 26 02:37:50 compute-0 lvm[489426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 02:37:50 compute-0 lvm[489426]: VG ceph_vg0 finished
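
The lvm messages are event-driven autoactivation: as udev announces each loopback PV, pvscan marks the owning volume group complete once all of its PVs are online and then activates it; here each of ceph_vg0/1/2 sits on a single loop device backing one OSD LV. A sketch that reproduces the PV-to-VG mapping with LVM's standard JSON reporting (run as root; the expected device layout is inferred from the log lines above):

    import json
    import subprocess

    out = subprocess.run(
        ["pvs", "-o", "pv_name,vg_name", "--reportformat", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Expected here: /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1, ...
    for pv in json.loads(out)["report"][0]["pv"]:
        print(f"{pv['pv_name']} -> {pv['vg_name']}")
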
Nov 26 02:37:50 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 02:37:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:51 compute-0 ceph-mon[192746]: from='mgr.14130 192.168.122.100:0/1830119506' entity='mgr.compute-0.vbisdw' 
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15859 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 02:37:51 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15861 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:51 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 02:37:51 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
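
Each pg_autoscaler pass logs, per pool, the fraction of raw capacity consumed (against the 64411926528-byte, roughly 60 GiB, raw total also visible in the pgmap lines), the pool's bias, and a fractional PG target that is quantized to a power of two before being compared with the current pg_num; the autoscaler only resizes when the two diverge substantially, which is why 'cephfs.cephfs.meta' can show "quantized to 16 (current 32)" with no change applied. A simplified sketch of the quantization step (an approximation of the rounding visible in the log, not Ceph's exact implementation, which also applies per-pool minimums and hysteresis):

    import math

    def quantize_pg_target(pg_target: float, min_pgs: int = 1) -> int:
        """Round a fractional PG target up to the nearest power of two."""
        return 1 << max(0, math.ceil(math.log2(max(pg_target, min_pgs))))

    # '.mgr' pool above: target 0.00216 -> quantized to 1, as logged.
    print(quantize_pg_target(0.0021557249951162337))  # 1
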
Nov 26 02:37:52 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 02:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 02:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110533149' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 02:37:52 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 02:37:52 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: ops {prefix=ops} (starting...)
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15867 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:37:52 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:37:52.450+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
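
The mgr returns EOPNOTSUPP (95) because the module backing 'healthcheck history ls' is disabled, and it prints the remedy inline; the same pattern recurs below for the insights module. A sketch applying it for the prometheus case, using exactly the commands named in the message above:

    import subprocess

    # Enable the module the mgr asked for, then retry the refused command.
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
    subprocess.run(["ceph", "healthcheck", "history", "ls"], check=True)
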
Nov 26 02:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 02:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302956444' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 02:37:52 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 B/s wr, 5 op/s
Nov 26 02:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 02:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045051777' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 02:37:52 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 02:37:52 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951885161' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 02:37:53 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: session ls {prefix=session ls} (starting...)
Nov 26 02:37:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 02:37:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102888754' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 02:37:53 compute-0 ceph-mds[220183]: mds.cephfs.compute-0.gmppdy asok_command: status {prefix=status} (starting...)
Nov 26 02:37:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 02:37:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947572816' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 02:37:53 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15881 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:53 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 02:37:53 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/909931035' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 02:37:54 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15883 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 02:37:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932465236' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 02:37:54 compute-0 nova_compute[350387]: 2025-11-26 02:37:54.509 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:54 compute-0 nova_compute[350387]: 2025-11-26 02:37:54.511 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 02:37:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824462380' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 02:37:54 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:54 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 02:37:54 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862027396' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 02:37:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 02:37:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1009786934' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 02:37:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 02:37:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140025449' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 02:37:55 compute-0 podman[490031]: 2025-11-26 02:37:55.195753924 +0000 UTC m=+0.092090997 container health_status 27da8481e21f3cb38660129bebff7f1df58d975a9874126f6a57604623857670 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 02:37:55 compute-0 podman[490033]: 2025-11-26 02:37:55.215271592 +0000 UTC m=+0.122129211 container health_status 5c2227d5f262d20353f45bb6c130a9043d4c746a796aeca00226cf4e122a22ce (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 02:37:55 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15895 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:55 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 02:37:55 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:37:55.467+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 02:37:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 02:37:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696925301' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 02:37:55 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 02:37:55 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376947379' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 02:37:55 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15901 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.297 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.328 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.329 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.329 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.329 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.329 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 02:37:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:37:56 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15903 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 02:37:56 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779012476' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 02:37:56 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:37:56 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241612261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:37:56 compute-0 nova_compute[350387]: 2025-11-26 02:37:56.826 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
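
The update_available_resource periodic task audits local resources and, because disk capacity here comes from Ceph, shells out to the exact command logged above, `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (0.497s this pass); the call surfaces on the mon as the client.openstack 'df' dispatch. A minimal sketch of reading the same capacity numbers (assuming the standard `ceph df` JSON layout with a top-level "stats" object):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]  # exact command from the log
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)

    gib = 1024 ** 3
    stats = df["stats"]
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB avail "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
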
Nov 26 02:37:56 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15910 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:56 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 02:37:56 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468421627' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101212160 unmapped: 28000256 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.2 total, 600.0 interval
    Cumulative writes: 7526 writes, 29K keys, 7526 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7526 writes, 1699 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 848 writes, 2356 keys, 848 commit groups, 1.0 writes per commit group, ingest: 1.52 MB, 0.00 MB/s
    Interval WAL: 848 writes, 383 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
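
The DB Stats block is easy to sanity-check: 7526 cumulative WAL writes over 1699 syncs is the reported 4.43 writes per sync, and the 600-second interval's 848 writes over 383 syncs is 2.21, i.e. this lightly loaded OSD's RocksDB syncs its WAL roughly every two write batches:

    # Figures copied from the RocksDB stats dump above.
    print(f"cumulative: {7526 / 1699:.2f} writes/sync")  # 4.43
    print(f"interval:   {848 / 383:.2f} writes/sync")    # 2.21
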
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 27992064 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa000000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212527 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 27983872 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.495849609s of 173.693954468s, submitted: 43
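Note: the _kv_sync_thread line is a utilization summary: idle for 173.495849609 s of a 173.693954468 s window with 43 transactions submitted, i.e. the commit thread was busy only ~0.1% of the time. The same arithmetic:

    idle, window, submitted = 173.495849609, 173.693954468, 43
    print(f"busy {1 - idle / window:.3%}")          # busy 0.114%
    print(f"{submitted / window:.2f} commits/s")    # 0.25 commits/s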
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 27967488 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d000 session 0x557318bfe960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d800 session 0x557318bfef00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315bd5000 session 0x557318bff860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4f9bf1000/0x0/0x4ffc00000, data 0x19b15d2/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 101261312 unmapped: 27951104 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211443 data_alloc: 218103808 data_used: 12652544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98861056 unmapped: 30351360 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d000 session 0x557315f221e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 98877440 unmapped: 30334976 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa690000/0x0/0x4ffc00000, data 0xf14560/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093316 data_alloc: 218103808 data_used: 8089600
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.161890030s of 19.983861923s, submitted: 122
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557317859800 session 0x557317897860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557317859c00 session 0x557315fc3c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557316fc5000 session 0x557315efd860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad30000/0x0/0x4ffc00000, data 0x874560/0x93e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 33267712 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005350 data_alloc: 218103808 data_used: 3854336
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 ms_handle_reset con 0x557315b9d800 session 0x5573177a9a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fad55000/0x0/0x4ffc00000, data 0x850551/0x919000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 74.880226135s of 75.004554749s, submitted: 22
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95985664 unmapped: 33226752 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa8e3000/0x0/0x4ffc00000, data 0xcc0584/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
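Note: in "epochs [133,134], i have 133, src has [1,134]" the peer's message carries OSD maps 133..134, this OSD already holds everything through 133, and the sender keeps the full history 1..134, so only epoch 134 needs to be applied (the very next lines log as "osd.2 134"). A sketch of that catch-up logic, with a hypothetical helper name and semantics inferred from the log:

    def epochs_to_apply(msg_first: int, msg_last: int, have: int) -> range:
        # Epochs from an incoming map message that this OSD still lacks,
        # assuming it already holds every epoch up to `have` (inferred).
        return range(max(msg_first, have + 1), msg_last + 1)

    print(list(epochs_to_apply(133, 134, have=133)))  # [134]
    print(list(epochs_to_apply(135, 135, have=134)))  # [135]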
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 134 ms_handle_reset con 0x557315b9d000 session 0x557317128d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 95993856 unmapped: 33218560 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045751 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa8de000/0x0/0x4ffc00000, data 0xcc2124/0xd8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96051200 unmapped: 33161216 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x55731891be00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96059392 unmapped: 33153024 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105742 data_alloc: 218103808 data_used: 3862528
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x557314ee7e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x5573177dc5a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315bd5000 session 0x557315f23a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa0da000/0x0/0x4ffc00000, data 0x14c3ca1/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x5573179565a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 33144832 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dda40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 29966336 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99246080 unmapped: 29966336 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106702 data_alloc: 218103808 data_used: 6815744
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x557316052d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 62.977394104s of 63.218254089s, submitted: 29
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104603648 unmapped: 24608768 heap: 129212416 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557315f21860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc4000 session 0x557317897680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573176741e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315e0d860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dc1e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557317544b40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x5573177dde00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315b19c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573175445a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x55731470c5a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 33660928 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8c06000/0x0/0x4ffc00000, data 0x2998cb1/0x2a68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 33652736 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266658 data_alloc: 218103808 data_used: 6815744
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557318bffe00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 33300480 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 33275904 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731781f400 session 0x557317675860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 100139008 unmapped: 33275904 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99647488 unmapped: 33767424 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 99123200 unmapped: 34291712 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297436 data_alloc: 218103808 data_used: 8544256
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 28131328 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 21864448 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111550464 unmapped: 21864448 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 21831680 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8bdc000/0x0/0x4ffc00000, data 0x29c2cb1/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 21798912 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413596 data_alloc: 234881024 data_used: 24965120
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557317675e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557318045a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.679318428s of 14.867232323s, submitted: 25
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573169b9680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9900000/0x0/0x4ffc00000, data 0x1c9ecb1/0x1d6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233888 data_alloc: 234881024 data_used: 12312576
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 28975104 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.922222137s of 25.935543060s, submitted: 5
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 24354816 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109133824 unmapped: 24281088 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8799000/0x0/0x4ffc00000, data 0x2e05cb1/0x2ed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379986 data_alloc: 234881024 data_used: 13197312
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 24166400 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2e0fcb1/0x2edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2e0fcb1/0x2edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 24158208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379986 data_alloc: 234881024 data_used: 13197312
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557315efc780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731788dc00 session 0x557315efc3c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315b0fe00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557315b0f4a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 24117248 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557315ed1c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557315ae34a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x55731788c000 session 0x557315ae3680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x5573177dda40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573177dc5a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 26722304 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428552 data_alloc: 234881024 data_used: 13201408
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f80e2000/0x0/0x4ffc00000, data 0x34bbcc1/0x358c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 26714112 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573177dd2c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573177dd0e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f59c00 session 0x55731867e000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557315f203c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x5573169b8f00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.991237640s of 16.390045166s, submitted: 115
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x55731867ef00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x557316053e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106831872 unmapped: 26583040 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f58c00 session 0x5573160525a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315b9d000 session 0x557317956b40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x557318044d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 26550272 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 26542080 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464210 data_alloc: 234881024 data_used: 13205504
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106881024 unmapped: 26533888 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 26517504 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 26845184 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f7d57000/0x0/0x4ffc00000, data 0x3844d33/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fa2400 session 0x557318bffa40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5400 session 0x5573175f2960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106463232 unmapped: 26951680 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557315f58800 session 0x557317675680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410974 data_alloc: 234881024 data_used: 13201408
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557318462800 session 0x557317120960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411266 data_alloc: 234881024 data_used: 13205504
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105119744 unmapped: 28295168 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.252437592s of 16.516160965s, submitted: 43
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411618 data_alloc: 234881024 data_used: 13205504
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 28286976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105447424 unmapped: 27967488 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106315776 unmapped: 27099136 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437378 data_alloc: 234881024 data_used: 16900096
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 27090944 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859800 session 0x5573161445a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557317859c00 session 0x557317110b40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.091217041s of 20.105909348s, submitted: 2
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f8404000/0x0/0x4ffc00000, data 0x3198d23/0x326a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 30302208 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 ms_handle_reset con 0x557316fc5000 session 0x5573180441e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184560 data_alloc: 218103808 data_used: 8294400
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9d51000/0x0/0x4ffc00000, data 0x184cd13/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 30334976 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.659467697s of 17.694372177s, submitted: 12
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104923136 unmapped: 28491776 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 28508160 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 28475392 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229706 data_alloc: 218103808 data_used: 8359936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 28352512 heap: 133414912 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f98cc000/0x0/0x4ffc00000, data 0x1cc9d13/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285957 data_alloc: 218103808 data_used: 8372224
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 36962304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5400 session 0x557315ed0960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 36962304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x5573171114a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573169b8000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x5573177dd4a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318462800 session 0x557317896960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.090188980s of 10.480854988s, submitted: 80
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104865792 unmapped: 36945920 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573173c9680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ec03c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315ed1c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557318044d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x557318045a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec3000/0x0/0x4ffc00000, data 0x26d5948/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307456 data_alloc: 218103808 data_used: 8376320
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 36937728 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318462800 session 0x557315ba65a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573177a8780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec3000/0x0/0x4ffc00000, data 0x26d5948/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 36929536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557318bff860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557318bff2c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309294 data_alloc: 218103808 data_used: 8376320
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 36904960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557315b4a000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fef400 session 0x557315b0e960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573180443c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315e1b860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 104906752 unmapped: 36904960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573173c8780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8ec2000/0x0/0x4ffc00000, data 0x26d5958/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318bfef00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 29294592 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557315aced20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557318bfed20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ba6960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.451944351s of 10.590756416s, submitted: 24
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557315f21680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x5573177dd0e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318feec00 session 0x557315e0c780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315f20780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557315efd860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x5573176752c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x5573171290e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318fee800 session 0x557315b4ba40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 29253632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x55731867e960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x557316041860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557317129e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 29253632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557317545e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859c00 session 0x5573171281e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788c000 session 0x557315ae2f00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315b0fa40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420502 data_alloc: 234881024 data_used: 17334272
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 29302784 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557316040d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318044b40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fc5000 session 0x55731470d2c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x557315ed1a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 30212096 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859800 session 0x557315ba6b40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788c000 session 0x5573177f05a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557318a82000 session 0x557318bffc20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 30203904 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396081 data_alloc: 234881024 data_used: 15851520
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 30187520 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 30179328 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448081 data_alloc: 234881024 data_used: 23171072
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448081 data_alloc: 234881024 data_used: 23171072
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448241 data_alloc: 234881024 data_used: 23175168
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87db000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448241 data_alloc: 234881024 data_used: 23175168
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114450432 unmapped: 27361280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.193988800s of 28.378929138s, submitted: 34
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450177 data_alloc: 234881024 data_used: 23162880
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114507776 unmapped: 27303936 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788b800 session 0x557315b0fc20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315f59c00 session 0x5573173c9e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450177 data_alloc: 234881024 data_used: 23162880
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315fc34a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 27418624 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x55731470d860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f87dd000/0x0/0x4ffc00000, data 0x2dba906/0x2e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.920627594s of 10.017802238s, submitted: 28
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315acf860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319694c00 session 0x5573177a90e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557317544d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557317956780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557315e0cb40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 115859456 unmapped: 25952256 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19693568 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 19546112 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f74c8000/0x0/0x4ffc00000, data 0x40ce916/0x41a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 20504576 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1618073 data_alloc: 234881024 data_used: 24219648
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122118144 unmapped: 19693568 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 19677184 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319695c00 session 0x557315404960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 20291584 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315b4ba40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 20291584 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315b4a000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315f22d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 19939328 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f73f1000/0x0/0x4ffc00000, data 0x41a4926/0x427d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1628325 data_alloc: 234881024 data_used: 24231936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 19922944 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788a000 session 0x5573173c9a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788bc00 session 0x557315b190e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 19922944 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.479908943s of 11.012975693s, submitted: 119
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 25452544 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557319695c00 session 0x557317110d20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 25100288 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494134 data_alloc: 234881024 data_used: 22036480
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7be2000/0x0/0x4ffc00000, data 0x2f89906/0x3060000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24150016 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 24133632 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x557315b19e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573175f21e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 24125440 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557314703e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1402270 data_alloc: 234881024 data_used: 18280448
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 25337856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.028324127s of 32.471443176s, submitted: 47
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 22847488 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8e15000/0x0/0x4ffc00000, data 0x2783894/0x2858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445974 data_alloc: 234881024 data_used: 18694144
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117620736 unmapped: 24190976 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24862720 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456186 data_alloc: 234881024 data_used: 18501632
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8946000/0x0/0x4ffc00000, data 0x2c4a894/0x2d1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24715264 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450154 data_alloc: 234881024 data_used: 18501632
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8933000/0x0/0x4ffc00000, data 0x2c66894/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24616960 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450154 data_alloc: 234881024 data_used: 18501632
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8933000/0x0/0x4ffc00000, data 0x2c66894/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x55731891ab40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788a000 session 0x557317128960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x5573177dc1e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24608768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557318045680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.408416748s of 20.744815826s, submitted: 74
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x5573179565a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573161454a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731788bc00 session 0x557316053860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557316053e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x55731867f0e0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496135 data_alloc: 234881024 data_used: 18501632
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 23085056 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x55731867eb40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x55731867f4a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496135 data_alloc: 234881024 data_used: 18501632
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f83bc000/0x0/0x4ffc00000, data 0x31db906/0x32b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 23076864 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317858c00 session 0x55731867e000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x55731867f2c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1500742 data_alloc: 234881024 data_used: 18542592
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 22863872 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 118996992 unmapped: 22814720 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.138267517s of 12.369709969s, submitted: 33
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 20340736 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8391000/0x0/0x4ffc00000, data 0x3205929/0x32dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 20340736 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23457792
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557315ba6960
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x55731867fa40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 20217856 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x557315404f00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 20193280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 20193280 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543966 data_alloc: 234881024 data_used: 23461888
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 20185088 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 20185088 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121847808 unmapped: 19963904 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 121970688 unmapped: 19841024 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 19554304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 19554304 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122290176 unmapped: 19521536 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122322944 unmapped: 19488768 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 19456000 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122363904 unmapped: 19447808 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122363904 unmapped: 19447808 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f838b000/0x0/0x4ffc00000, data 0x320b929/0x32e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1543806 data_alloc: 234881024 data_used: 23588864
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 19439616 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.034168243s of 30.101922989s, submitted: 22
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 125911040 unmapped: 15900672 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 15777792 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7ba3000/0x0/0x4ffc00000, data 0x39eb929/0x3ac3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 15310848 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1624362 data_alloc: 234881024 data_used: 24428544
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 15278080 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1624538 data_alloc: 234881024 data_used: 24432640
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 15245312 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 15237120 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.249171257s of 13.556386948s, submitted: 68
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1632570 data_alloc: 234881024 data_used: 24686592
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1633706 data_alloc: 234881024 data_used: 24715264
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7b9e000/0x0/0x4ffc00000, data 0x39f8929/0x3ad0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 15048704 heap: 141811712 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731928c000 session 0x557315ae3680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315acf4a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557317957680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573169b94a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315ed1860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127516672 unmapped: 18489344 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x55731928c000 session 0x5573160412c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315b0f2c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a800 session 0x557317129a40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557316fa2400 session 0x5573169b9c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x44da98b/0x45b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1715758 data_alloc: 234881024 data_used: 24711168
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127590400 unmapped: 18415616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x44da98b/0x45b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 18366464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.957085609s of 12.243449211s, submitted: 62
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317821c00 session 0x557315b0ed20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1720489 data_alloc: 234881024 data_used: 24670208
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127492096 unmapped: 18513920 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130916352 unmapped: 15089664 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 134160384 unmapped: 11845632 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136699904 unmapped: 9306112 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136699904 unmapped: 9306112 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1799529 data_alloc: 251658240 data_used: 35803136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136749056 unmapped: 9256960 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136749056 unmapped: 9256960 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136790016 unmapped: 9216000 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 9199616 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7091000/0x0/0x4ffc00000, data 0x450498b/0x45dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.489391327s of 11.564118385s, submitted: 12
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x557315b18780
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b9d000 session 0x557315b194a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 136839168 unmapped: 9166848 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315406000 session 0x557315f21860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638713 data_alloc: 251658240 data_used: 29650944
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1638713 data_alloc: 251658240 data_used: 29650944
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x377b8f6/0x3851000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859000 session 0x5573171294a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557317859400 session 0x557318045c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 133545984 unmapped: 12460032 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 ms_handle_reset con 0x557315b7a400 session 0x5573177dcb40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128188416 unmapped: 17817600 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445106 data_alloc: 234881024 data_used: 20480000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 17809408 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127541248 unmapped: 18464768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445106 data_alloc: 234881024 data_used: 20480000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8d02000/0x0/0x4ffc00000, data 0x28988d6/0x296c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 18456576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.778203964s of 20.959466934s, submitted: 43
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 137 ms_handle_reset con 0x557315406000 session 0x5573180454a0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452961 data_alloc: 234881024 data_used: 20488192
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 18407424 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 137 ms_handle_reset con 0x557315b7a400 session 0x557318bffa40
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127598592 unmapped: 18407424 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x289a876/0x2971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127614976 unmapped: 18391040 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 138 ms_handle_reset con 0x557315b9d000 session 0x5573176743c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8cf9000/0x0/0x4ffc00000, data 0x289c447/0x2974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 18300928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 18300928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455487 data_alloc: 234881024 data_used: 20516864
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128131072 unmapped: 17874944 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128860160 unmapped: 17145856 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128860160 unmapped: 17145856 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8613000/0x0/0x4ffc00000, data 0x2f85024/0x305b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 17637376 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128368640 unmapped: 17637376 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f859a000/0x0/0x4ffc00000, data 0x2ffe024/0x30d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.584693909s of 10.118380547s, submitted: 114
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528496 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                            ** DB Stats **
                                            Uptime(secs): 3600.2 total, 600.0 interval
                                            Cumulative writes: 9454 writes, 36K keys, 9454 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9454 writes, 2477 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1928 writes, 7544 keys, 1928 commit groups, 1.0 writes per commit group, ingest: 8.85 MB, 0.01 MB/s
                                            Interval WAL: 1928 writes, 778 syncs, 2.48 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 17563648 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128450560 unmapped: 17555456 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: mgrc ms_handle_reset ms_handle_reset con 0x557316fc4800
Nov 26 02:37:57 compute-0 ceph-osd[208794]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2845592742
Nov 26 02:37:57 compute-0 ceph-osd[208794]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2845592742,v1:192.168.122.100:6801/2845592742]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: mgrc handle_mgr_configure stats_period=5
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528544 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 17276928 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.026569366s of 16.066671371s, submitted: 15
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8587000/0x0/0x4ffc00000, data 0x300fa87/0x30e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128770048 unmapped: 17235968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528700 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 17285120 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528700 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.779787064s of 12.801182747s, submitted: 3
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 17309696 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 17342464 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128671744 unmapped: 17334272 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 17326080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 17317888 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128565248 unmapped: 17440768 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x557316e3a000 session 0x5573177dcd20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528876 data_alloc: 234881024 data_used: 21426176
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128573440 unmapped: 17432576 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 17424384 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 17424384 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 128589824 unmapped: 17416192 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 66.053329468s of 66.063583374s, submitted: 1
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129662976 unmapped: 16343040 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 16334848 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129703936 unmapped: 16302080 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129761280 unmapped: 16244736 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129769472 unmapped: 16236544 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529788 data_alloc: 234881024 data_used: 21463040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 16228352 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8584000/0x0/0x4ffc00000, data 0x3012a87/0x30ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129785856 unmapped: 16220160 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529948 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 71.231262207s of 71.723098755s, submitted: 108
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129794048 unmapped: 16211968 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129802240 unmapped: 16203776 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530256 data_alloc: 234881024 data_used: 21467136
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129810432 unmapped: 16195584 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8582000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 16187392 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 16179200 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129835008 unmapped: 16171008 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 16162816 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129851392 unmapped: 16154624 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 131.769302368s of 131.775421143s, submitted: 1
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8583000/0x0/0x4ffc00000, data 0x3013a87/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532160 data_alloc: 234881024 data_used: 21659648
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 15966208 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f856c000/0x0/0x4ffc00000, data 0x302aa87/0x3102000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1532780 data_alloc: 234881024 data_used: 21659648
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 15958016 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533260 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.488658905s of 21.519058228s, submitted: 5
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 16072704 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.238 350391 WARNING nova.virt.libvirt.driver [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.239 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3695MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.239 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.240 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533680 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.315682411s of 14.335088730s, submitted: 2
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129916928 unmapped: 16089088 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129925120 unmapped: 16080896 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129941504 unmapped: 16064512 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129949696 unmapped: 16056320 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129908736 unmapped: 16097280 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129957888 unmapped: 16048128 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129966080 unmapped: 16039936 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533856 data_alloc: 234881024 data_used: 21671936
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8557000/0x0/0x4ffc00000, data 0x303fa87/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129974272 unmapped: 16031744 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788b000 session 0x55731891ad20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788a400 session 0x557317129860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 212.596664429s of 212.610076904s, submitted: 2
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f861f000/0x0/0x4ffc00000, data 0x2f77a87/0x304f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788b000 session 0x557318bff680
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520609 data_alloc: 234881024 data_used: 21667840
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129982464 unmapped: 16023552 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f77a77/0x304e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520609 data_alloc: 234881024 data_used: 21667840
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f77a77/0x304e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 16015360 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.107772827s of 10.144620895s, submitted: 7
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731928c000 session 0x557317111e00
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731928c400 session 0x55731867fc20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 25952256 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 ms_handle_reset con 0x55731788a400 session 0x557315e0c000
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 9326592
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9557000/0x0/0x4ffc00000, data 0x1ccaa15/0x1da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9557000/0x0/0x4ffc00000, data 0x1ccaa15/0x1da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 9326592
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.019871712s of 10.321649551s, submitted: 50
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 25935872 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 140 ms_handle_reset con 0x557315b7a800 session 0x557315b0ed20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 32325632 heap: 146006016 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fa0cb000/0x0/0x4ffc00000, data 0x14cc5c3/0x15a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328411 data_alloc: 218103808 data_used: 2514944
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 49152000 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 141 ms_handle_reset con 0x55731788a400 session 0x5573154043c0
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113639424 unmapped: 49152000 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 142 ms_handle_reset con 0x55731788b000 session 0x557317957c20
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa0c6000/0x0/0x4ffc00000, data 0x14cfd1a/0x15a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 49086464 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa0c6000/0x0/0x4ffc00000, data 0x14cfd1a/0x15a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229276 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa0c3000/0x0/0x4ffc00000, data 0x14d1799/0x15aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.369564056s of 11.736245155s, submitted: 70
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.2 total, 600.0 interval
Cumulative writes: 9943 writes, 38K keys, 9943 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9943 writes, 2702 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 489 writes, 1339 keys, 489 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
Interval WAL: 489 writes, 225 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 50012160 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 50003968 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c0000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232250 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 49758208 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 49963008 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 49676288 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'log dump' '{prefix=log dump}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf dump' '{prefix=perf dump}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf schema' '{prefix=perf schema}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 49774592 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 117.177131653s of 117.198890686s, submitted: 15
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 49750016 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0c1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 49684480 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 49659904 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15913 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 49700864 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 49692672 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 49676288 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [1])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 49668096 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 49659904 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.316 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.317 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 ms_handle_reset con 0x557317859000 session 0x557315e1b860
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113131520 unmapped: 49659904 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 49651712 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113147904 unmapped: 49643520 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 49627136 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 49618944 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 49610752 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 49602560 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 49602560 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113197056 unmapped: 49594368 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 49586176 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 49577984 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 49577984 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 49577984 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 49577984 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.338 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
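[annotation] Amid the OSD chatter, nova-compute's periodic resource update shells out to ceph df to read cluster capacity for its RBD storage backend. An equivalent standalone sketch, with the command and flags taken verbatim from the log line above; the total_avail_bytes field name is assumed from recent Ceph JSON output:

    # Reproduce the capacity query nova-compute runs via oslo_concurrency.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # "stats" holds cluster-wide totals; per-pool usage sits under "pools".
    print("cluster bytes available:", stats["stats"]["total_avail_bytes"])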
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 49577984 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 49569792 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 49561600 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 49553408 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 49553408 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 49545216 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 49537024 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113262592 unmapped: 49528832 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 49520640 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 49512448 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                            ** DB Stats **
                                            Uptime(secs): 4800.2 total, 600.0 interval
                                            Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2792 syncs, 3.63 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 49504256 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 49496064 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 49487872 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 49479680 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 49479680 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 49479680 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 49479680 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 49471488 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 49463296 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 49455104 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.846313477s of 600.277404785s, submitted: 90
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 49446912 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 49381376 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 49340416 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 49315840 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x14d31fc/0x15ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:37:57 compute-0 ceph-osd[208794]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:37:57 compute-0 ceph-osd[208794]: bluestore.MempoolThread(0x5573139fdb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1231370 data_alloc: 218103808 data_used: 2519040
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 49201152 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113532928 unmapped: 49258496 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: prioritycache tune_memory target: 4294967296 mapped: 113385472 unmapped: 49405952 heap: 162791424 old mem: 2845415832 new mem: 2845415832
Nov 26 02:37:57 compute-0 ceph-osd[208794]: do_command 'log dump' '{prefix=log dump}'
Nov 26 02:37:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 02:37:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017682897' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 02:37:57 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 02:37:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020180510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.798 350391 DEBUG oslo_concurrency.processutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.804 350391 DEBUG nova.compute.provider_tree [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed in ProviderTree for provider: 0e9e5c9b-dee2-4076-966b-e19b2697b966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.818 350391 DEBUG nova.scheduler.client.report [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Inventory has not changed for provider 0e9e5c9b-dee2-4076-966b-e19b2697b966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.820 350391 DEBUG nova.compute.resource_tracker [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 02:37:57 compute-0 nova_compute[350387]: 2025-11-26 02:37:57.820 350391 DEBUG oslo_concurrency.lockutils [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 02:37:57 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 02:37:57 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435716631' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 02:37:58 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15923 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 02:37:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749687887' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 02:37:58 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15927 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 02:37:58 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 02:37:58 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781925206' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 02:37:58 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:37:59 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15933 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:37:59 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 26 02:37:59 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452554819' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 02:37:59 compute-0 nova_compute[350387]: 2025-11-26 02:37:59.510 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:59 compute-0 nova_compute[350387]: 2025-11-26 02:37:59.512 350391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 02:37:59 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:37:59 compute-0 podman[158021]: time="2025-11-26T02:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 02:37:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Nov 26 02:37:59 compute-0 podman[158021]: @ - - [26/Nov/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8218 "" "Go-http-client/1.1"
Nov 26 02:38:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 26 02:38:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053277775' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 02:38:00 compute-0 ceph-mgr[193049]: log_channel(audit) log [DBG] : from='client.15943 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 02:38:00 compute-0 ceph-mgr[193049]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:38:00 compute-0 ceph-36901f64-240e-5c29-a2e2-29b56f2c329c-mgr-compute-0-vbisdw[193045]: 2025-11-26T02:38:00.344+0000 7f7615e48640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 02:38:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 26 02:38:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514462079' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 02:38:00 compute-0 ceph-mgr[193049]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Nov 26 02:38:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 26 02:38:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827924540' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 02:38:00 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 26 02:38:00 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229989008' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 02:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 26 02:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006475668' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 02:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 26 02:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890782277' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 02:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 26 02:38:01 compute-0 openstack_network_exporter[367323]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 02:38:01 compute-0 openstack_network_exporter[367323]: ERROR   02:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 02:38:01 compute-0 openstack_network_exporter[367323]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 02:38:01 compute-0 openstack_network_exporter[367323]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 02:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 26 02:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407723405' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 02:38:01 compute-0 ceph-mon[192746]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 26 02:38:01 compute-0 ceph-mon[192746]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943277961' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 02:38:01 compute-0 nova_compute[350387]: 2025-11-26 02:38:01.820 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:38:01 compute-0 nova_compute[350387]: 2025-11-26 02:38:01.821 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:38:01 compute-0 nova_compute[350387]: 2025-11-26 02:38:01.821 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:38:01 compute-0 nova_compute[350387]: 2025-11-26 02:38:01.821 350391 DEBUG oslo_service.periodic_task [None req-db210a33-9948-451b-bf38-2d4d02353f2a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 30941184 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 rsyslogd[188548]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 8724 writes, 34K keys, 8724 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8724 writes, 2036 syncs, 4.28 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 878 writes, 2698 keys, 878 commit groups, 1.0 writes per commit group, ingest: 1.85 MB, 0.00 MB/s
Interval WAL: 878 writes, 391 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 30932992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4f951c000/0x0/0x4ffc00000, data 0x20857ea/0x2151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286453 data_alloc: 218103808 data_used: 15339520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 173.001434326s of 173.086853027s, submitted: 32
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110895104 unmapped: 31694848 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56eb5d800 session 0x55a56eb44f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56d927400 session 0x55a56bb654a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107053056 unmapped: 35536896 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b31cf00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 35446784 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130752 data_alloc: 218103808 data_used: 10362880
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 35438592 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa4a1000/0x0/0x4ffc00000, data 0x1103745/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56d727800 session 0x55a56e425e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.714269638s of 20.615226746s, submitted: 137
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 37912576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 ms_handle_reset con 0x55a56fc29800 session 0x55a56e607680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fac73000/0x0/0x4ffc00000, data 0x934735/0x9fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045518 data_alloc: 218103808 data_used: 7057408
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 74.238616943s of 74.364341736s, submitted: 25
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 37888000 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104710144 unmapped: 37879808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 134 ms_handle_reset con 0x55a56fc29400 session 0x55a56c2b2000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa473000/0x0/0x4ffc00000, data 0x1134735/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 37863424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56d9ffa40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197032 data_alloc: 218103808 data_used: 7073792
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f97fb000/0x0/0x4ffc00000, data 0x1da7e52/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 37871616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56c28c1e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28800 session 0x55a56df770e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 37863424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56df765a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f97fb000/0x0/0x4ffc00000, data 0x1da7e52/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 60.036300659s of 60.289119720s, submitted: 26
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56df77860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f97fb000/0x0/0x4ffc00000, data 0x1da7e52/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 31752192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56b8d8d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 31752192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 31752192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29800 session 0x55a56b353e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56df79680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e4f50e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56e606000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56e633c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e606d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 30105600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56e631c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56b9452c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323660 data_alloc: 218103808 data_used: 13881344
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 30105600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8ba3000/0x0/0x4ffc00000, data 0x29fdf26/0x2acb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 30105600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 30105600 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e6594a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56c28c000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 30760960 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56c28c1e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 30760960 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb8b680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8ba3000/0x0/0x4ffc00000, data 0x29fdf26/0x2acb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb8d2c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329098 data_alloc: 218103808 data_used: 13885440
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 30408704 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.548220634s of 10.035059929s, submitted: 80
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112189440 unmapped: 30400512 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 112197632 unmapped: 30392320 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113319936 unmapped: 29270016 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 28794880 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1398990 data_alloc: 234881024 data_used: 23592960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114909184 unmapped: 27680768 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 27672576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 27672576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb7da40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56eb4e780
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8b78000/0x0/0x4ffc00000, data 0x2a27f36/0x2af6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114917376 unmapped: 27672576 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e504960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 27639808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 27639808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 27639808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 27639808 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 27631616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 27631616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 27631616 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 27623424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 27615232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369545 data_alloc: 234881024 data_used: 22966272
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x27abeb4/0x2877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 27607040 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.431713104s of 33.620201111s, submitted: 41
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117678080 unmapped: 24911872 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403693 data_alloc: 234881024 data_used: 23093248
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118439936 unmapped: 24150016 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 25518080 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f89b0000/0x0/0x4ffc00000, data 0x2bf1eb4/0x2cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 24961024 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117628928 unmapped: 24961024 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f89b0000/0x0/0x4ffc00000, data 0x2bf1eb4/0x2cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24928256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417167 data_alloc: 234881024 data_used: 23797760
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 24928256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117669888 unmapped: 24920064 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 24788992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 24788992 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8991000/0x0/0x4ffc00000, data 0x2c11eb4/0x2cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29400 session 0x55a56c826d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.656462669s of 10.237721443s, submitted: 113
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b368d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 24264704 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455488 data_alloc: 234881024 data_used: 23801856
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56e255a40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56c5acd20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x3073f16/0x3140000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 24231936 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455488 data_alloc: 234881024 data_used: 23801856
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56eb8d4a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d1000 session 0x55a56b353e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 23928832 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb4ef00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 22888448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 22888448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7f4d000/0x0/0x4ffc00000, data 0x3654f16/0x3721000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 22880256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 22880256 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509326 data_alloc: 234881024 data_used: 23920640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.817225456s of 10.958477974s, submitted: 17
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119799808 unmapped: 22790144 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e488d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 22609920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f7f4a000/0x0/0x4ffc00000, data 0x3657f16/0x3724000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56df783c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28400 session 0x55a56e4241e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c2530e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 22446080 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e425e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56c7b5a40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc28000 session 0x55a56c713860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474319 data_alloc: 234881024 data_used: 23801856
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ac000/0x0/0x4ffc00000, data 0x31f5eb4/0x32c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474319 data_alloc: 234881024 data_used: 23801856
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.621734619s of 11.774707794s, submitted: 27
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ab000/0x0/0x4ffc00000, data 0x31f6eb4/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 22503424 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474963 data_alloc: 234881024 data_used: 23805952
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 22495232 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83ab000/0x0/0x4ffc00000, data 0x31f6eb4/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 120184832 unmapped: 22405120 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121978880 unmapped: 20611072 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 20045824 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f83a7000/0x0/0x4ffc00000, data 0x31fbeb4/0x32c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1516151 data_alloc: 234881024 data_used: 29446144
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 20004864 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.601295471s of 19.642702103s, submitted: 5
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29000 session 0x55a56d9ffa40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56fc29800 session 0x55a56e631860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122593280 unmapped: 19996672 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e4892c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336755 data_alloc: 234881024 data_used: 19525632
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f921a000/0x0/0x4ffc00000, data 0x2388e52/0x2453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 23912448 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.012077332s of 18.237907410s, submitted: 45
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 22724608 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432429 data_alloc: 234881024 data_used: 20500480
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 19464192 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8620000/0x0/0x4ffc00000, data 0x2f7be52/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123240448 unmapped: 19349504 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123928576 unmapped: 18661376 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451629 data_alloc: 234881024 data_used: 20959232
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 18513920 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f8598000/0x0/0x4ffc00000, data 0x3003e52/0x30ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122355712 unmapped: 20234240 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e6305a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 20250624 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56e632960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e632f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d926c00 session 0x55a56e632d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 20250624 heap: 142589952 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e5043c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896427155s of 10.405808449s, submitted: 124
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e213860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566241 data_alloc: 234881024 data_used: 20971520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b9463c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e2125a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d927400 session 0x55a56c13c000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb503c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f775c000/0x0/0x4ffc00000, data 0x3e469cf/0x3f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121790464 unmapped: 28147712 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 28008448 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 28008448 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565081 data_alloc: 234881024 data_used: 20971520
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb4f2c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 27992064 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb50780
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 27992064 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f7752000/0x0/0x4ffc00000, data 0x3e509cf/0x3f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56eb7d860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb5d800 session 0x55a56e607860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56c28c000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56c4052c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56b31c000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e607680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c800 session 0x55a56c5acf00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 122298368 unmapped: 27639808 heap: 149938176 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56df78f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c000 session 0x55a56c7b3860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.034696579s of 10.289134979s, submitted: 43
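The _kv_sync_thread utilization line is the clearest load signal in this stretch: it reports idle time within the sampling window plus the number of kv commits submitted, so the busy fraction and commit rate fall out directly. For the line above:

    import re

    LINE = ("_kv_sync_thread utilization: idle 10.034696579s of 10.289134979s, "
            "submitted: 43")  # copied from above

    idle_s, window_s, submitted = re.search(
        r"idle ([0-9.]+)s of ([0-9.]+)s, submitted: (\d+)", LINE).groups()
    idle_s, window_s, submitted = float(idle_s), float(window_s), int(submitted)

    busy_s = window_s - idle_s
    print(f"busy {busy_s:.3f}s of {window_s:.3f}s "
          f"({100 * busy_s / window_s:.2f}%), "
          f"{submitted / window_s:.1f} commits/s")

That works out to about 2.5% busy at roughly 4 commits/s; the later samples in this section (idle 25.2s of 25.2s, 30.1s of 30.1s) are quieter still.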
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56b8d6780
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582211 data_alloc: 234881024 data_used: 22630400
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f7728000/0x0/0x4ffc00000, data 0x3e7a9cf/0x3f46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c253e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b815c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56b8a9860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124567552 unmapped: 29573120 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c252f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56e3d01e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56eb3cd20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d78c000 session 0x55a56eb4f680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e633680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e606000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56c502f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56b368b40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc29000 session 0x55a56e607a40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 129622016 unmapped: 24518656 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 22323200 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b8a81e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56eb3cb40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e4f41e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56eb3c3c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6187000/0x0/0x4ffc00000, data 0x5419a41/0x54e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 123600896 unmapped: 30539776 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e4f45a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e505860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e212d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 29499392 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e2121e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1641379 data_alloc: 234881024 data_used: 20008960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x45f3a41/0x46c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 124813312 unmapped: 29327360 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 128786432 unmapped: 25354240 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1785935 data_alloc: 251658240 data_used: 36515840
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 22511616 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 22085632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 22085632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x45f3a41/0x46c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 22052864 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.828028679s of 13.312131882s, submitted: 110
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 21970944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 21938176 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 21905408 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 21905408 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 21864448 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1792707 data_alloc: 251658240 data_used: 37019648
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6fab000/0x0/0x4ffc00000, data 0x45f4a41/0x46c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132292608 unmapped: 21848064 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56e297e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28800 session 0x55a56e2963c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e296f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e633e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 21831680 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.208217621s of 25.232786179s, submitted: 5
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e632000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56e254f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137666560 unmapped: 16474112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6a9e000/0x0/0x4ffc00000, data 0x4b01aa3/0x4bd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1871388 data_alloc: 251658240 data_used: 37527552
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 16580608 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137437184 unmapped: 16703488 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141099008 unmapped: 13041664 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143065088 unmapped: 11075584 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143572992 unmapped: 10567680 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29400 session 0x55a56e6323c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1965642 data_alloc: 251658240 data_used: 38883328
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143818752 unmapped: 10321920 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e632960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5cf4000/0x0/0x4ffc00000, data 0x58a2aa3/0x5971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 10313728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e633680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c7b4b40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 11026432 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143114240 unmapped: 11026432 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.638150215s of 10.307005882s, submitted: 200
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56e296b40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28000 session 0x55a56df765a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143122432 unmapped: 11018240 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5cfc000/0x0/0x4ffc00000, data 0x58a2ac6/0x5972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,1])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1793725 data_alloc: 234881024 data_used: 34222080
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b8145a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f63ed000/0x0/0x4ffc00000, data 0x4a2ba64/0x4afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1805933 data_alloc: 251658240 data_used: 36061184
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f63ed000/0x0/0x4ffc00000, data 0x4a2ba64/0x4afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56df7b0e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56eb7c1e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 13385728 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e3d0000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619351 data_alloc: 234881024 data_used: 30748672
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 16973824 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.527814865s of 26.195085526s, submitted: 83
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619367 data_alloc: 234881024 data_used: 30744576
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 16965632 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1619367 data_alloc: 234881024 data_used: 30744576
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 16957440 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79ef000/0x0/0x4ffc00000, data 0x37a0a64/0x386f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 16957440 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 10330112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 10330112 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b4b000/0x0/0x4ffc00000, data 0x4644a64/0x4713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 8953856 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1740889 data_alloc: 234881024 data_used: 31739904
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 10706944 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.758532524s of 15.169282913s, submitted: 129
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b17000/0x0/0x4ffc00000, data 0x4678a64/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143441920 unmapped: 10698752 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1741529 data_alloc: 234881024 data_used: 31744000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28000 session 0x55a56df7b860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c8c2000
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6b15000/0x0/0x4ffc00000, data 0x467aa64/0x4749000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56c8c32c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56e2134a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 10690560 heap: 154140672 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815789223s of 12.824033737s, submitted: 1
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e213680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 16744448 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 16744448 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778373 data_alloc: 234881024 data_used: 31748096
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b3000/0x0/0x4ffc00000, data 0x4adca64/0x4bab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec28400 session 0x55a56b31cf00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e632b40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56e6334a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b3000/0x0/0x4ffc00000, data 0x4adca64/0x4bab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28400 session 0x55a56e633e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1781619 data_alloc: 234881024 data_used: 31748096
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 16654336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143654912 unmapped: 16785408 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143417344 unmapped: 17022976 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16990208 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.152029037s of 12.250783920s, submitted: 20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801379 data_alloc: 251658240 data_used: 34910208
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56b8a9860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801323 data_alloc: 251658240 data_used: 34910208
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 17702912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 17686528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142761984 unmapped: 17678336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1801963 data_alloc: 251658240 data_used: 34979840
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 17670144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f66b2000/0x0/0x4ffc00000, data 0x4adca74/0x4bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.086006165s of 30.111179352s, submitted: 4
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1866039 data_alloc: 251658240 data_used: 35270656
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5ee4000/0x0/0x4ffc00000, data 0x52a4a74/0x5374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e2e000/0x0/0x4ffc00000, data 0x5351a74/0x5421000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1878977 data_alloc: 251658240 data_used: 35135488
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1872269 data_alloc: 251658240 data_used: 35135488
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 16400384 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.147089005s of 13.530894279s, submitted: 89
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144056320 unmapped: 16384000 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144179200 unmapped: 16261120 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1897909 data_alloc: 251658240 data_used: 37761024
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5e18000/0x0/0x4ffc00000, data 0x5376a74/0x5446000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 15974400 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144474112 unmapped: 15966208 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1941027 data_alloc: 251658240 data_used: 37781504
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56f455000 session 0x55a56e212960
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144572416 unmapped: 15867904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5976000/0x0/0x4ffc00000, data 0x5818a74/0x58e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144572416 unmapped: 15867904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb51e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144580608 unmapped: 15859712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56d926c00 session 0x55a56eb51c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56df7ad20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1941027 data_alloc: 251658240 data_used: 37781504
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132235527s of 12.247295380s, submitted: 20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56e4f3000 session 0x55a56b8a8f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x5819aa7/0x58eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144605184 unmapped: 15835136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 16097280 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 15278080 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1967442 data_alloc: 251658240 data_used: 41213952
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5973000/0x0/0x4ffc00000, data 0x5819aa7/0x58eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 13541376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5971000/0x0/0x4ffc00000, data 0x581aaa7/0x58ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5971000/0x0/0x4ffc00000, data 0x581aaa7/0x58ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1970102 data_alloc: 251658240 data_used: 41492480
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006687164s of 10.064773560s, submitted: 9
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56fc28c00 session 0x55a56e6323c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29800 session 0x55a56e3d1a40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f5966000/0x0/0x4ffc00000, data 0x5826aa7/0x58f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 13443072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0400 session 0x55a56eb3cb40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b1aa97/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1846947 data_alloc: 251658240 data_used: 38170624
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f6669000/0x0/0x4ffc00000, data 0x4b21a97/0x4bf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56ec29c00 session 0x55a56b8d8d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1848537 data_alloc: 251658240 data_used: 38178816
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.912253380s of 10.141081810s, submitted: 27
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 14663680 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 ms_handle_reset con 0x55a56c8d0800 session 0x55a56b8d81e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba97/0x389c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1637779 data_alloc: 234881024 data_used: 29880320
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba12/0x389a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1637779 data_alloc: 234881024 data_used: 29880320
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 19734528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.099064827s of 12.339138985s, submitted: 39
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f79c2000/0x0/0x4ffc00000, data 0x37cba12/0x389a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56d926c00 session 0x55a56df76d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e254d20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 19636224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 137 ms_handle_reset con 0x55a56c8d0800 session 0x55a56e424b40
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140795904 unmapped: 19644416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1647934 data_alloc: 234881024 data_used: 29888512
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 19578880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 ms_handle_reset con 0x55a56ec29800 session 0x55a56b8145a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f79bd000/0x0/0x4ffc00000, data 0x37cf160/0x38a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 19578880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 19578880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f79bd000/0x0/0x4ffc00000, data 0x37cf160/0x38a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 18219008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141230080 unmapped: 19210240 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1691241 data_alloc: 234881024 data_used: 30142464
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141303808 unmapped: 19136512 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 19030016 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 19030016 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d71160/0x3e42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.054459572s of 10.655910492s, submitted: 117
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3056 syncs, 3.70 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2571 writes, 9774 keys, 2571 commit groups, 1.0 writes per commit group, ingest: 11.14 MB, 0.02 MB/s
                                            Interval WAL: 2571 writes, 1020 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1706799 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141418496 unmapped: 19021824 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56eb5ec00 session 0x55a56c405c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: mgrc ms_handle_reset ms_handle_reset con 0x55a56c81b800
Nov 26 02:38:02 compute-0 ceph-osd[207774]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2845592742
Nov 26 02:38:02 compute-0 ceph-osd[207774]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2845592742,v1:192.168.122.100:6801/2845592742]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: mgrc handle_mgr_configure stats_period=5
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 18931712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56d726c00 session 0x55a56eb4f0e0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56d726800 session 0x55a56bb64f00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141508608 unmapped: 18931712 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141516800 unmapped: 18923520 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 18915328 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f740e000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1707031 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141533184 unmapped: 18907136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.436485291s of 33.464859009s, submitted: 16
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141467648 unmapped: 18972672 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700375 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141475840 unmapped: 18964480 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700375 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141484032 unmapped: 18956288 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141484032 unmapped: 18956288 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141484032 unmapped: 18956288 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700375 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700375 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141492224 unmapped: 18948096 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700375 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141500416 unmapped: 18939904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 141500416 unmapped: 18939904 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.569644928s of 25.575139999s, submitted: 1
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142557184 unmapped: 17883136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142557184 unmapped: 17883136 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142565376 unmapped: 17874944 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 17866752 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 17866752 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 17866752 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 17866752 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142573568 unmapped: 17866752 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142581760 unmapped: 17858560 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142581760 unmapped: 17858560 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 17842176 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 17842176 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 17842176 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 17842176 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56eb5f800 session 0x55a56c405e00
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142598144 unmapped: 17842176 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142606336 unmapped: 17833984 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142606336 unmapped: 17833984 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701607 data_alloc: 234881024 data_used: 29970432
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142606336 unmapped: 17833984 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142614528 unmapped: 17825792 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.374835968s of 35.401542664s, submitted: 8
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142622720 unmapped: 17817600 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 142630912 unmapped: 17809408 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143712256 unmapped: 16728064 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 16703488 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143745024 unmapped: 16695296 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143753216 unmapped: 16687104 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 16678912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 16678912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 16678912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1700343 data_alloc: 234881024 data_used: 29982720
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 16678912 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.167800903s of 35.824729919s, submitted: 110
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701303 data_alloc: 234881024 data_used: 30072832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143769600 unmapped: 16670720 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701303 data_alloc: 234881024 data_used: 30072832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 16662528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 16662528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701303 data_alloc: 234881024 data_used: 30072832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 16662528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 16662528 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 16654336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701463 data_alloc: 234881024 data_used: 30076928
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 16654336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 16654336 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143794176 unmapped: 16646144 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 16637952 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701463 data_alloc: 234881024 data_used: 30076928
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 16637952 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 16629760 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 16629760 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 16629760 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701463 data_alloc: 234881024 data_used: 30076928
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 16629760 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143810560 unmapped: 16629760 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 16613376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701463 data_alloc: 234881024 data_used: 30076928
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 16613376 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.809391022s of 35.815101624s, submitted: 1
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143843328 unmapped: 16596992 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 16588800 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7417000/0x0/0x4ffc00000, data 0x3d74bc3/0x3e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 16564224 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704263 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143884288 unmapped: 16556032 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143892480 unmapped: 16547840 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143900672 unmapped: 16539648 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143908864 unmapped: 16531456 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 137.426910400s of 137.448806763s, submitted: 14
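
The _kv_sync_thread utilization line is the one message in this burst that summarizes actual work: over the reported window the sync thread was idle for nearly the whole interval and flushed 14 transactions. The busy fraction falls out directly, as in this sketch:

def kv_sync_busy(idle_s, total_s, submitted):
    """Busy fraction and rough commit rate for a _kv_sync_thread report."""
    busy = 1 - idle_s / total_s
    return {
        "busy_pct": round(busy * 100, 4),                # ~0.0159% busy
        "commits_per_s": round(submitted / total_s, 4),  # ~0.1 commits/s
    }

print(kv_sync_busy(137.426910400, 137.448806763, 14))

An essentially idle sync thread is consistent with the tiny kv_used figures in the _resize_shards lines: this OSD is doing almost no write traffic during the capture.
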
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704447 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7415000/0x0/0x4ffc00000, data 0x3d76bc3/0x3e49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.859411240s of 15.869720459s, submitted: 1
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143859712 unmapped: 16580608 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1701987 data_alloc: 234881024 data_used: 30064640
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143867904 unmapped: 16572416 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.327033997s of 14.335655212s, submitted: 1
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704259 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.335412979s of 15.360344887s, submitted: 13
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143908864 unmapped: 16531456 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 16523264 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 16515072 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143933440 unmapped: 16506880 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 16498688 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143949824 unmapped: 16490496 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143958016 unmapped: 16482304 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143966208 unmapped: 16474112 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 234881024 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143974400 unmapped: 16465920 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143982592 unmapped: 16457728 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143982592 unmapped: 16457728 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143982592 unmapped: 16457728 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 218103808 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143982592 unmapped: 16457728 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143990784 unmapped: 16449536 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 218103808 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 218103808 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 218103808 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1704435 data_alloc: 218103808 data_used: 30052352
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 143998976 unmapped: 16441344 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e255680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 196.677001953s of 196.685287476s, submitted: 1
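(annotation) The _kv_sync_thread utilization line above reports idle time over wall time for the BlueStore key-value sync thread: across this roughly 197 s interval it was idle about 99.9958% of the time and flushed a single submitted transaction, so osd.1 is essentially quiescent here. The similar line further down (idle 11.034541130s of 11.219616890s, submitted: 38) works out to about 98.35% idle. The arithmetic, for reference:

    # Values copied from the two _kv_sync_thread lines in this window.
    for idle, total in [(196.677001953, 196.685287476),
                        (11.034541130, 11.219616890)]:
        print(f"{idle / total:.4%} idle")  # ~99.9958% and ~98.3504%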
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56eb6dc00 session 0x55a56e297680
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f7414000/0x0/0x4ffc00000, data 0x3d77bc3/0x3e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135815168 unmapped: 24625152 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56c8d0400 session 0x55a56e2554a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f899d000/0x0/0x4ffc00000, data 0x27efbb3/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f899d000/0x0/0x4ffc00000, data 0x27efbb3/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449775 data_alloc: 218103808 data_used: 17752064
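(annotation) The _resize_shards sample above is where the shard allocations shift (kv_alloc 1191182336 -> 1207959552, meta_alloc 1124073472 -> 1140850688, with meta_used and data_used dropping), coinciding with the rocksdb low ratio moving from 0.056338 to 0.0555556; an earlier, smaller rebalance appears further up where data_alloc went from 234881024 to 218103808. A minimal sketch for diffing consecutive _resize_shards samples to surface such transitions; treating the field order as stable is an assumption based on the lines in this log:

    import re
    import sys

    FIELDS = ("cache_size", "kv_alloc", "kv_used", "kv_onode_alloc",
              "kv_onode_used", "meta_alloc", "meta_used",
              "data_alloc", "data_used")
    # Build one regex matching the whole field list in printed order.
    PAT = re.compile(r"_resize_shards " +
                     " ".join(f"{f}: (\\d+)" for f in FIELDS))

    prev = None
    with open(sys.argv[1], errors="replace") as f:
        for line in f:
            m = PAT.search(line)
            if not m:
                continue
            cur = dict(zip(FIELDS, map(int, m.groups())))
            if prev:
                # Report only the fields that changed between samples.
                for k in FIELDS:
                    if cur[k] != prev[k]:
                        print(f"{k}: {prev[k]} -> {cur[k]}")
            prev = cur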
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f899d000/0x0/0x4ffc00000, data 0x27efbb3/0x28c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449775 data_alloc: 218103808 data_used: 17752064
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56b5ca000 session 0x55a56c252780
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56eb71000 session 0x55a56e3d1860
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 24616960 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.034541130s of 11.219616890s, submitted: 38
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 ms_handle_reset con 0x55a56c8d0800 session 0x55a56c7b43c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x1daeb80/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340654 data_alloc: 218103808 data_used: 13922304
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x1daeb80/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340654 data_alloc: 218103808 data_used: 13922304
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 26443776 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x1daeb80/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 heartbeat osd_stat(store_statfs(0x4f93df000/0x0/0x4ffc00000, data 0x1daeb80/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134012928 unmapped: 26427392 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.104556084s of 10.296481133s, submitted: 34
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 140 ms_handle_reset con 0x55a56e4f2800 session 0x55a56e4252c0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f93dc000/0x0/0x4ffc00000, data 0x1db0751/0x1e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f93dc000/0x0/0x4ffc00000, data 0x1db0751/0x1e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 141 ms_handle_reset con 0x55a56b5ca000 session 0x55a56e633c20
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 142 ms_handle_reset con 0x55a56c8d0400 session 0x55a56df7a5a0
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350680 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93d5000/0x0/0x4ffc00000, data 0x1db3ecb/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f93d5000/0x0/0x4ffc00000, data 0x1db3ecb/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 142 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352982 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f93d3000/0x0/0x4ffc00000, data 0x1db594a/0x1e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.565364838s of 11.843656540s, submitted: 57
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
Cumulative WAL: 11K writes, 3285 syncs, 3.60 writes per sync, written: 0.04 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 524 writes, 1394 keys, 524 commit groups, 1.0 writes per commit group, ingest: 0.69 MB, 0.00 MB/s
Interval WAL: 524 writes, 229 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f93d0000/0x0/0x4ffc00000, data 0x1db73ad/0x1e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 02:38:02 compute-0 ceph-osd[207774]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 02:38:02 compute-0 ceph-osd[207774]: bluestore.MempoolThread(0x55a56a267b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355956 data_alloc: 218103808 data_used: 13930496
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
Nov 26 02:38:02 compute-0 ceph-osd[207774]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26411008 heap: 160440320 old mem: 2845415832 new mem: 2845415832
